Xu, Ying; Cohen Hubal, Elaine A.; Little, John C.
2010-01-01
Background: Because of the ubiquitous nature of phthalates in the environment and the potential for adverse human health effects, an urgent need exists to identify the most important sources and pathways of exposure. Objectives: Using emissions of di(2-ethylhexyl) phthalate (DEHP) from vinyl flooring (VF) as an illustrative example, we describe a fundamental approach that can be used to identify the important sources and pathways of exposure associated with phthalates in indoor material. Methods: We used a three-compartment model to estimate the emission rate of DEHP from VF and the evolving exposures via inhalation, dermal absorption, and oral ingestion of dust in a realistic indoor setting. Results: A sensitivity analysis indicates that the VF source characteristics (surface area and material-phase concentration of DEHP), as well as the external mass-transfer coefficient and ventilation rate, are important variables that influence the steady-state DEHP concentration and the resulting exposure. In addition, DEHP is sorbed by interior surfaces, and the associated surface area and surface/air partition coefficients strongly influence the time to steady state. The roughly 40-fold range in predicted exposure reveals the inherent difficulty in using biomonitoring to identify specific sources of exposure to phthalates in the general population. Conclusions: The relatively simple dependence on source and chemical-specific transport parameters suggests that the mechanistic modeling approach could be extended to predict exposures arising from other sources of phthalates as well as additional sources of other semivolatile organic compounds (SVOCs) such as biocides and flame retardants. This modeling approach could also provide a relatively inexpensive way to quantify exposure to many of the SVOCs used in indoor materials and consumer products. PMID:20123613
Davis, Jonathan H.
2015-03-09
Future multi-tonne Direct Detection experiments will be sensitive to solar neutrino induced nuclear recoils which form an irreducible background to light Dark Matter searches. Indeed for masses around 6 GeV the spectra of neutrinos and Dark Matter are so similar that experiments are said to run into a neutrino floor, for which sensitivity increases only marginally with exposure past a certain cross section. In this work we show that this floor can be overcome using the different annual modulation expected from solar neutrinos and Dark Matter. Specifically for cross sections below the neutrino floor the DM signal is observable through a phase shift and a smaller amplitude for the time-dependent event rate. This allows the exclusion power to be improved by up to an order of magnitude for large exposures. In addition we demonstrate that, using only spectral information, the neutrino floor exists over a wider mass range than has been previously shown, since the large uncertainties in the Dark Matter velocity distribution make the signal spectrum harder to distinguish from the neutrino background. However for most velocity distributions it can still be surpassed using timing information, and so the neutrino floor is not an absolute limit on the sensitivity of Direct Detection experiments.
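The timing argument can be sketched numerically. The rates, modulation amplitudes, and peak days below are illustrative placeholders, not values from the paper; the point is only that adding a dark-matter component to a solar-neutrino rate shifts the phase of the summed annual modulation and reduces its relative amplitude:

```python
import math

DAYS = 365.25

def modulated(t, mean, amp, peak_day):
    # Sinusoidal annual modulation: the rate peaks at t = peak_day.
    return mean * (1.0 + amp * math.cos(2.0 * math.pi * (t - peak_day) / DAYS))

# Placeholder numbers: a solar-neutrino rate peaking near perihelion
# (~Jan 3) plus a smaller dark-matter rate peaking near June 1.
def total_rate(t):
    return (modulated(t, mean=10.0, amp=0.033, peak_day=3.0)
            + modulated(t, mean=2.0, amp=0.05, peak_day=152.0))

# The summed signal peaks at a day shifted away from the pure-neutrino
# peak, with a smaller amplitude relative to its mean of 12.0.
peak_day = max(range(366), key=total_rate)
```

With these numbers the combined peak lands in mid-January rather than on day 3, which is the kind of phase shift the paper proposes to exploit.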
Uncertainty and Sensitivity Analyses Plan
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Sensitivity and Uncertainty Analysis Shell
Energy Science and Technology Software Center (ESTSC)
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some process for which input is uncertain, and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which inputs to the process model are to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample, and the user-supplied process model analyzes the sample. The SUNS post-processor displays statistical results from any existing file that contains sampled input and output values.
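The loose-coupling workflow described above can be sketched in a few lines. The process model and input ranges here are invented placeholders standing in for the user's code, not anything from SUNS itself:

```python
import random

def process_model(inputs):
    # Stand-in for the user's simulation code: any function
    # mapping a dict of uncertain inputs to an output value.
    return inputs["k"] * inputs["t"] ** 2

def generate_sample(spec, n, seed=0):
    # Draw n realizations of each uncertain input from its
    # assumed distribution (here: uniform between given bounds).
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in spec.items()}
            for _ in range(n)]

# Uncertain inputs and their assumed ranges (placeholders).
spec = {"k": (0.8, 1.2), "t": (9.0, 11.0)}

# The shell generates the sample; the process model runs on it;
# a post-processor would then summarize the paired input/output file.
sample = generate_sample(spec, n=1000)
outputs = [process_model(s) for s in sample]
mean = sum(outputs) / len(outputs)
```

The shell never needs to know what the model computes, only which inputs to vary, which is what makes the coupling "loose".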
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps along with other physical parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. In contrast to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a couple of runs to cover a large uncertainty region. Because only a small number of runs are required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty analysis. By knowing the relative sensitivity of time and space steps along with other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results.
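The "differentiate the model with respect to each parameter" idea can be sketched for a scalar ODE. The toy model and step sizes below are illustrative, not from the report: for dy/dt = -p*y, the sensitivity s = dy/dp satisfies its own ODE obtained by differentiating the right-hand side, and the two are integrated together:

```python
def forward_sensitivity(p, y0=1.0, t_end=1.0, dt=1e-4):
    # Toy model: dy/dt = -p*y.  Differentiating with respect to p
    # gives the forward sensitivity s = dy/dp, which satisfies
    #   ds/dt = (df/dy)*s + df/dp = -p*s - y,  s(0) = 0.
    # Both equations are advanced together with forward Euler.
    y, s = y0, 0.0
    for _ in range(int(round(t_end / dt))):
        y, s = y + dt * (-p * y), s + dt * (-p * s - y)
    return y, s

# Exact solution: y = y0*exp(-p*t), so dy/dp = -t*y0*exp(-p*t);
# at p = 2, t = 1 both have magnitude exp(-2) ~ 0.1353.
y, s = forward_sensitivity(p=2.0)
```

The augmented system is the same size as the original per parameter, which is why a handful of runs can replace the large Monte Carlo ensembles of the "black box" approach.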
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps along with other physical parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time-step and spatial-step forward sensitivity analysis method can also replace the traditional time-step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated using case studies on products from starch-polyvinyl alcohol based biopolymers and their petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Uncertainty Quantification of Equilibrium Climate Sensitivity
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Brandon, S. T.; Covey, C. C.; Domyancic, D. M.; Johannesson, G.; Klein, R.; Tannahill, J.; Zhang, Y.
2011-12-01
Significant uncertainties exist in the temperature response of the climate system to changes in the levels of atmospheric carbon dioxide. We report progress to quantify the uncertainties of equilibrium climate sensitivity using perturbed parameter ensembles of the Community Earth System Model (CESM). Through a strategic initiative at the Lawrence Livermore National Laboratory, we have been developing uncertainty quantification (UQ) methods and incorporating them into a software framework called the UQ Pipeline. We have applied this framework to generate a large number of ensemble simulations using Latin Hypercube and other schemes to sample up to three dozen uncertain parameters in the atmospheric (CAM) and sea ice (CICE) model components of CESM. The parameters sampled are related to many highly uncertain processes, including deep and shallow convection, boundary layer turbulence, cloud optical and microphysical properties, and sea ice albedo. An extensive ensemble database comprising more than 46,000 simulated climate-model-years of recent climate conditions has been assembled. This database is being used to train surrogate models of CESM responses and to perform statistical calibrations of the CAM and CICE models given observational data constraints. The calibrated models serve as a basis for propagating uncertainties forward through climate change simulations using a slab ocean model configuration of CESM. This procedure is being used to quantify the probability density function of equilibrium climate sensitivity accounting for uncertainties in climate model processes. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013. (LLNL-ABS-491765)
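Latin Hypercube sampling, named in the abstract as one of the sampling schemes, can be sketched compactly. The parameter bounds below are placeholders; the essential property is one stratified draw per equal-probability interval in each dimension, with the strata shuffled independently across dimensions:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    # One draw per stratum and parameter; each parameter's strata
    # are randomly permuted so strata are paired at random.
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        columns.append(strata)
    return [tuple(col[i] for col in columns) for i in range(n)]

# Placeholder ranges for two uncertain parameters.
points = latin_hypercube(8, bounds=[(0.0, 1.0), (10.0, 20.0)])
```

Unlike plain Monte Carlo, every marginal interval is guaranteed to be covered exactly once, which is why LHS needs far fewer runs for the same coverage of parameter space.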
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
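The residual-sampling propagation described above can be sketched with a toy two-stage chain. The stage models and residual samples below are made-up placeholders, not the paper's fitted models; the pattern is simply to perturb each stage's output with a draw from that stage's empirical residual distribution:

```python
import random

rng = random.Random(0)

def sample_chain(irradiance, models, residuals_by_stage):
    # Push one input through the model chain, adding to each stage's
    # output a residual drawn from that stage's empirical residuals.
    x = irradiance
    for model, residuals in zip(models, residuals_by_stage):
        x = model(x) + rng.choice(residuals)
    return x

# Hypothetical two-stage chain: irradiance -> DC power -> AC power,
# with invented residual samples (zero-mean) for each stage.
models = [lambda g: 0.2 * g, lambda p_dc: 0.96 * p_dc]
residuals = [[-2.0, -0.5, 0.0, 0.5, 2.0], [-1.0, 0.0, 1.0]]

# Repeating the sampled pass yields an empirical output distribution.
outputs = [sample_chain(800.0, models, residuals) for _ in range(2000)]
mean_ac = sum(outputs) / len(outputs)
```

Summary statistics of `outputs` (spread, percentiles) then quantify how each stage's residuals contribute to output uncertainty.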
Uncertainty and Sensitivity in Surface Dynamics Modeling
NASA Astrophysics Data System (ADS)
Kettner, Albert J.; Syvitski, James P. M.
2016-05-01
The papers in this special issue on 'Uncertainty and Sensitivity in Surface Dynamics Modeling' stem from papers submitted after the 2014 annual meeting of the Community Surface Dynamics Modeling System (CSDMS). CSDMS facilitates a diverse community of experts (now in 68 countries) that collectively investigate the Earth's surface, the dynamic interface between lithosphere, hydrosphere, cryosphere, and atmosphere, by promoting, developing, supporting and disseminating integrated open source software modules. By organizing more than 1500 researchers, CSDMS has the privilege of identifying community strengths and weaknesses in the practice of software development. We recognize, for example, that progress has been slow on identifying and quantifying uncertainty and sensitivity in numerical modeling of Earth's surface dynamics. This special issue is meant to raise awareness of these important subjects and highlight state-of-the-art progress.
Temperature targets revisited under climate sensitivity uncertainty
NASA Astrophysics Data System (ADS)
Neubersch, Delf; Roth, Robert; Held, Hermann
2015-04-01
While the 2° target has become an official goal of the COP (Conference of the Parties) process, recent work has shown that it requires re-interpretation if climate sensitivity uncertainty is considered in combination with anticipated future learning (Schmidt et al., 2011). A strict probabilistic limit as suggested by the Copenhagen diagnosis may lead to conceptual flaws in view of future learning, such as a negative expected value of information or even ill-posed policy recommendations. Instead, Schmidt et al. suggest trading off the probabilistic transgression of a temperature target against mitigation-induced welfare losses, and call this procedure cost risk analysis (CRA). Here we spell out CRA for the integrated assessment model MIND and derive necessary conditions for the exact nature of that trade-off. With CRA at hand, the expected value of climate information for a given temperature target can be meaningfully assessed for the first time. When focusing on a linear risk function as the most conservative of all possible risk functions, we find that 2°-target-induced mitigation costs could be reduced by up to one third if the climate response to carbon dioxide emissions were known with certainty, amounting to hundreds of billions of Euros per year (Neubersch et al., 2014). Further benefits of CRA over strictly formulated temperature targets are discussed. References: D. Neubersch, H. Held, A. Otto, Operationalizing climate targets under learning: An application of cost-risk analysis, Climatic Change, 126 (3), 305-318, DOI 10.1007/s10584-014-1223-z (2014). M. G. W. Schmidt, A. Lorenz, H. Held, E. Kriegler, Climate Targets under Uncertainty: Challenges and Remedies, Climatic Change Letters, 104 (3-4), 783-791, DOI 10.1007/s10584-010-9985-4 (2011).
Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.
2012-07-01
Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples, each with a much smaller number of Monte Carlo histories. The method is applied, along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va, to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a reduction of computing time by factors on the order of 100. (authors)
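One way to realize the two-series idea can be sketched as follows. The stand-in "Monte Carlo code" and the input distribution are invented for illustration, and this is a sketch of the general principle rather than the paper's exact estimator: the covariance of two series that share the same epistemic samples but carry independent statistical noise estimates the epistemic variance alone, because the independent aleatoric noise averages out of the cross term:

```python
import random

def mc_code(k, histories, rng):
    # Stand-in Monte Carlo solver: estimates the response k with
    # aleatoric noise that shrinks as the history count grows.
    return sum(rng.gauss(k, 1.0) for _ in range(histories)) / histories

rng = random.Random(1)
n_samples, histories = 200, 25  # deliberately few histories per run

# Epistemic input uncertainty: k ~ U(2, 4), true variance 1/3.
ks = [rng.uniform(2.0, 4.0) for _ in range(n_samples)]

# Two series: identical epistemic samples, independent aleatoric noise.
y1 = [mc_code(k, histories, rng) for k in ks]
y2 = [mc_code(k, histories, rng) for k in ks]

m1 = sum(y1) / n_samples
m2 = sum(y2) / n_samples
# Sample covariance of the paired series ~ epistemic variance only.
epistemic_var = sum((a - m1) * (b - m2) for a, b in zip(y1, y2)) / (n_samples - 1)
```

A plain variance of either series alone would instead return the epistemic variance inflated by the aleatoric noise variance, which is what the traditional many-histories approach pays to suppress.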
Techniques to quantify the sensitivity of deterministic model uncertainties
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar; Unwin, S.D.
1989-04-01
Several existing methods for the assessment of the sensitivity of output uncertainty distributions generated by deterministic computer models to the uncertainty distributions assigned to the input parameters are reviewed and new techniques are proposed. Merits and limitations of the various techniques are examined by detailed application to the suppression pool aerosol removal code (SPARC).
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which are best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Uncertainty and sensitivity analysis and its applications in OCD measurements
NASA Astrophysics Data System (ADS)
Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio
2009-03-01
This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-saver for optimizing OCD models. By including real system noise in the model, an accurate method for predicting measurement uncertainties is shown. Assessing, at an early stage, the uncertainties, sensitivities and correlations of the parameters to be measured guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two different cases: (1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD); (2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE-OCD) can easily be selected to achieve the best OCD model performance.
SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data
Williams, Mark L; Rearden, Bradley T
2008-01-01
Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
Peer review of HEDR uncertainty and sensitivity analyses plan
Hoffman, F.O.
1993-06-01
This report consists of detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington to evaluate the draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters through the use of alternative time histories and spatial patterns deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.
Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui
2016-03-01
This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis of a numerical vapor intrusion model for nine input parameters, including soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m3). Model sensitivity and uncertainty for shallow and high-concentration vapor sources (3 m and 400 g/m3) are much smaller than for deep and low-concentration sources (8 m and 1 g/m3). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soil like sand) are key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate and soil gas permeability are key contributors to model output uncertainty. Another important finding is that the impact of aerobic biodegradation on the vapor intrusion potential of petroleum hydrocarbons is negligible when the vapor source concentration is high, because insufficient oxygen supply limits aerobic biodegradation activity. PMID:26619051
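A one-at-a-time loop of the kind described can be sketched generically. The stand-in model and baseline values below are invented for illustration and are not the Abreu & Johnson model; each parameter is perturbed in turn while all others stay at baseline:

```python
def model(params):
    # Hypothetical stand-in for a vapor-intrusion model: indoor
    # concentration as a simple function of a few inputs.
    return (params["source_conc"] * params["permeability"]
            / (params["air_exchange"] * params["volume"]))

# Invented baseline values for illustration only.
baseline = {"source_conc": 400.0, "permeability": 1e-12,
            "air_exchange": 0.5, "volume": 300.0}

def oat_sensitivity(model, baseline, rel_step=0.10):
    # Perturb one parameter at a time by -10% and +10%, recording
    # the relative change in the output for each direction.
    y0 = model(baseline)
    result = {}
    for name in baseline:
        changes = []
        for factor in (1 - rel_step, 1 + rel_step):
            p = dict(baseline)
            p[name] *= factor
            changes.append((model(p) - y0) / y0)
        result[name] = changes
    return result

sens = oat_sensitivity(model, baseline)
```

Ranking parameters by the magnitude of these relative changes identifies the key contributors to output uncertainty, which is the form in which the abstract reports its findings.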
NASA Astrophysics Data System (ADS)
Carpenter, T. M.; Georgakakos, K. P.
2001-12-01
The current study focuses on the sensitivity of distributed-model flow forecast uncertainty to the uncertainty in the radar rainfall input. Various studies estimate a 30 to 100% uncertainty in radar rainfall estimates from the operational NEXRAD radars. This study addresses the following questions: How does this uncertainty in rainfall input impact the flow simulations produced by a hydrologic model? How does this effect compare to the uncertainty in flow forecasts resulting from initial condition and model parametric uncertainty? The hydrologic model used, HRCDHM, is a catchment-based, distributed hydrologic model that accepts hourly precipitation input from the operational WSR-88D weather radar. A GIS is used to process digital terrain data, delineate sub-catchments of a given large watershed, and supply sub-catchment characteristics (sub-basin area, stream length, stream slope and channel-network topology) to the hydrologic model components. HRCDHM uses an adaptation of the U.S. NWS operational Sacramento soil moisture accounting model to produce runoff for each sub-catchment within the larger study watershed. Kinematic or Muskingum-Cunge channel routing is implemented to combine and route sub-catchment flows through the channel network. Available spatial soils information is used to vary hydrologic model parameters from sub-catchment to sub-catchment. HRCDHM was applied to the 2,500 km2 Illinois River watershed in Arkansas and Oklahoma with outlet at Tahlequah, Oklahoma. The watershed is under the coverage of the operational WSR-88D radar at Tulsa, Oklahoma. For distributed modeling, the watershed area has been subdivided into sub-catchments with an average area of 80 km2. Flow simulations are validated at various gauged locations within the watershed. A Monte Carlo framework was used to assess the sensitivity of the simulated flows to uncertainty in radar input for different radar error distributions (uniform or exponential), and to make comparisons to the flow
Sensitivity and uncertainty studies of the CRAC2 computer code.
Kocher, D C; Ward, R C; Killough, G G; Dunning, D E; Hicks, B B; Hosker, R P; Ku, J Y; Rao, K S
1987-12-01
We have studied the sensitivity of health impacts from nuclear reactor accidents, as predicted by the CRAC2 computer code, to the following sources of uncertainty: (1) the model for plume rise, (2) the model for wet deposition, (3) the meteorological bin-sampling procedure for selecting weather sequences with rain, (4) the dose conversion factors for inhalation as affected by uncertainties in the particle size of the carrier aerosol and the clearance rates of radionuclides from the respiratory tract, (5) the weathering half-time for external ground-surface exposure, and (6) the transfer coefficients for terrestrial foodchain pathways. Predicted health impacts usually showed little sensitivity to use of an alternative plume-rise model or a modified rain-bin structure in bin-sampling. Health impacts often were quite sensitive to use of an alternative wet-deposition model in single-trial runs with rain during plume passage, but were less sensitive to the model in bin-sampling runs. Uncertainties in the inhalation dose conversion factors had important effects on early injuries in single-trial runs. Latent cancer fatalities were moderately sensitive to uncertainties in the weathering half-time for ground-surface exposure, but showed little sensitivity to the transfer coefficients for terrestrial foodchain pathways. Sensitivities of CRAC2 predictions to uncertainties in the models and parameters also depended on the magnitude of the source term, and some of the effects on early health effects were comparable to those that were due only to selection of different sets of weather sequences in bin-sampling. PMID:3444936
Uncertainty and Sensitivity Analyses of Model Predictions of Solute Transport
NASA Astrophysics Data System (ADS)
Skaggs, T. H.; Suarez, D. L.; Goldberg, S. R.
2012-12-01
Soil salinity reduces crop production on about 50% of irrigated lands worldwide. One roadblock to increased use of advanced computer simulation tools for better managing irrigation water and soil salinity is that the models usually do not provide an estimate of the uncertainty in model predictions, which can be substantial. In this work, we investigate methods for putting confidence bounds on HYDRUS-1D simulations of solute leaching in soils. Uncertainties in model parameters estimated with pedotransfer functions are propagated through simulation model predictions using Monte Carlo simulation. Generalized sensitivity analyses indicate which parameters are most significant for quantifying uncertainty. The simulation results are compared with experimentally observed transport variability in a number of large, replicated lysimeters.
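The propagation step described above can be sketched in a few lines. The leaching model below is a hypothetical stand-in (HYDRUS-1D itself is not called), and the parameter distributions are illustrative assumptions rather than actual pedotransfer-function outputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def leaching_model(ks, theta_s, disp):
    """Toy stand-in for a solute-transport simulation (not HYDRUS-1D):
    returns a leached-mass fraction from three soil parameters."""
    return 1.0 - np.exp(-ks * disp / theta_s)

# Parameter uncertainty, e.g. from pedotransfer-function regressions
# (the distributions below are illustrative assumptions)
n = 10_000
ks = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)    # hydraulic conductivity
theta_s = rng.normal(0.45, 0.03, size=n)                   # saturated water content
disp = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=n)  # dispersivity

# Monte Carlo propagation: run the model once per parameter draw,
# then read confidence bounds off the output distribution.
out = leaching_model(ks, theta_s, disp)
lo, hi = np.percentile(out, [2.5, 97.5])
print(f"median={np.median(out):.3f}, 95% bounds=({lo:.3f}, {hi:.3f})")
```

A generalized sensitivity analysis would then compare the parameter draws that produce high versus low output values to see which parameters drive the spread.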
Uncertainty and Sensitivity Analyses of Duct Propagation Models
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Watson, Willie R.; Jones, Michael G.
2008-01-01
This paper presents results of uncertainty and sensitivity analyses conducted to assess the relative merits of three duct propagation codes. Results from this study are intended to support identification of a "working envelope" within which to use the various approaches underlying these propagation codes. This investigation considers a segmented liner configuration that models the NASA Langley Grazing Incidence Tube, for which a large set of measured data was available. For the uncertainty analysis, the selected input parameters (source sound pressure level, average Mach number, liner impedance, exit impedance, static pressure and static temperature) are randomly varied over a range of values. Uncertainty limits (95% confidence levels) are computed for the predicted values from each code, and are compared with the corresponding 95% confidence intervals in the measured data. Generally, the mean values of the predicted attenuation are observed to track the mean values of the measured attenuation quite well, and predicted confidence intervals tend to be larger in the presence of mean flow. A two-level, six-factor sensitivity study is also conducted in which the six inputs are varied one at a time to assess their effect on the predicted attenuation. As expected, the results demonstrate the liner resistance and reactance to be the most important input parameters. They also indicate the exit impedance is a significant contributor to uncertainty in the predicted attenuation.
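A two-level, one-factor-at-a-time screening of this kind can be sketched as follows. The attenuation function and the factor levels are invented stand-ins for illustration, not any of the three propagation codes:

```python
import numpy as np

def attenuation(params):
    """Hypothetical surrogate for a duct-propagation code: predicted
    attenuation (dB) as a smooth function of six inputs (not a real code).
    Static pressure and temperature are deliberately left without effect."""
    spl, mach, r_liner, x_liner, p_static, t_static = params
    return 30.0 / (1.0 + (r_liner - 1.0) ** 2 + x_liner ** 2) \
        - 5.0 * mach + 0.01 * (spl - 130.0)

# Factor levels: (low, high) for each of the six inputs (invented values)
levels = {
    "source SPL":         (120.0, 140.0),
    "Mach number":        (0.0, 0.5),
    "liner resistance":   (0.5, 2.5),
    "liner reactance":    (-0.5, 1.0),
    "static pressure":    (0.95, 1.05),   # normalized
    "static temperature": (0.95, 1.05),   # normalized
}
baseline = [np.mean(v) for v in levels.values()]

# Two-level screening: set one factor to each of its levels, hold the
# rest at baseline, and record the swing in the predicted attenuation.
effects = {}
for i, (name, (lo, hi)) in enumerate(levels.items()):
    p_lo, p_hi = list(baseline), list(baseline)
    p_lo[i], p_hi[i] = lo, hi
    effects[name] = abs(attenuation(p_hi) - attenuation(p_lo))

for name, e in sorted(effects.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} effect = {e:6.2f} dB")
```

With these invented levels the liner resistance and reactance dominate, mirroring the ranking reported in the abstract.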
Sensitivity and uncertainty analysis applied to the JHR reactivity prediction
Leray, O.; Vaglio-Gaudard, C.; Hudelot, J. P.; Santamarina, A.; Noguere, G.; Di-Salvo, J.
2012-07-01
The on-going AMMON program in the EOLE reactor at CEA Cadarache (France) provides experimental results to qualify the HORUS-3D/N neutronics calculation scheme used for the design and safety studies of the new material-testing Jules Horowitz Reactor (JHR). This paper presents the determination of technological and nuclear data uncertainties on the core reactivity and the propagation of the latter from the AMMON experiment to the JHR. The technological uncertainty propagation was performed with a direct perturbation methodology using the 3D French stochastic code TRIPOLI4 and a statistical methodology using the 2D French deterministic code APOLLO2-MOC, which leads to a value of 289 pcm (1σ). The nuclear data uncertainty propagation relies on a sensitivity study of the main isotopes and the use of a retroactive marginalization method applied to the JEFF-3.1.1 ²⁷Al evaluation in order to obtain a realistic multi-group covariance matrix associated with the considered evaluation. This nuclear data uncertainty propagation leads to a keff uncertainty of 624 pcm for the JHR core and 684 pcm for the AMMON reference configuration core. Finally, transposition and reduction of the prior uncertainty were made using the representativity method, which demonstrates the similarity of the AMMON experiment with the JHR (the representativity factor is 0.95). The final impact of JEFF-3.1.1 nuclear data on the beginning-of-life (BOL) JHR reactivity calculated by HORUS-3D/N V4.0 is a bias of +216 pcm with an associated posterior uncertainty of 304 pcm (1σ). (authors)
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
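The first-order statistical moment method described above can be sketched as follows, with a hypothetical closed-form output standing in for the CFD code and central finite differences standing in for its sensitivity derivatives:

```python
import numpy as np

def lift_coeff(x):
    """Hypothetical stand-in for a CFD output (not a real solver):
    lift coefficient as a function of Mach number and angle of attack."""
    mach, alpha = x
    return 2.0 * np.pi * np.deg2rad(alpha) / np.sqrt(abs(1.0 - mach ** 2))

mu = np.array([0.5, 2.0])     # mean Mach number, mean angle of attack (deg)
sigma = np.array([0.01, 0.1])  # standard deviations of the inputs (assumed)

# First-order moment method: the output mean is the deterministic run at
# the input means; the output variance is the sum of squared sensitivity
# derivatives times the input variances (inputs assumed independent).
h = 1e-6
grad = np.array([
    (lift_coeff(mu + h * np.eye(2)[i]) - lift_coeff(mu - h * np.eye(2)[i])) / (2 * h)
    for i in range(2)
])
mean_cl = lift_coeff(mu)
var_cl = np.sum(grad ** 2 * sigma ** 2)

print(f"E[CL] = {mean_cl:.4f}, Std[CL] = {np.sqrt(var_cl):.4f}")
```

A probabilistic constraint of the kind used in the paper can then be cast as, e.g., mean plus k standard deviations staying below a limit, with k set by the target probability.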
Photogrammetry-Derived National Shoreline: Uncertainty and Sensitivity Analyses
NASA Astrophysics Data System (ADS)
Yao, F.; Parrish, C. E.; Calder, B. R.; Peeri, S.; Rzhanov, Y.
2013-12-01
Tidally-referenced shoreline data serve a multitude of purposes, ranging from nautical charting, to coastal change analysis, wetland migration studies, coastal planning, resource management and emergency management. To assess the suitability of the shoreline for a particular application, end users need not only the best available shoreline, but also reliable estimates of the uncertainty in the shoreline position. NOAA's National Geodetic Survey (NGS) is responsible for mapping the national shoreline depicted on NOAA nautical charts. Previous studies have focused on modeling the uncertainty in NGS shoreline derived from airborne lidar data, but, to date, these methods have not been extended to aerial imagery and photogrammetric shoreline extraction methods, which remain the primary shoreline mapping methods used by NGS. The aim of this study is to develop a rigorous total propagated uncertainty (TPU) model for shoreline compiled from both tide-coordinated and non-tide-coordinated aerial imagery using photogrammetric methods. The project site encompasses the strait linking Dennys Bay, Whiting Bay and Cobscook Bay in the 'Downeast' Maine coastal region. This area is of interest due to the ecosystem services it provides, as well as its complex geomorphology. The region is characterized by a large tide range, strong tidal currents, numerous embayments, and coarse-sediment pocket beaches. Statistical methods were used to assess the uncertainty of shoreline in this site mapped using NGS's photogrammetric workflow, as well as to analyze the sensitivity of the mapped shoreline position to a variety of parameters, including elevation gradient in the intertidal zone. The TPU model developed in this work can easily be extended to other areas and may facilitate estimation of uncertainty in inundation models and marsh migration models.
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...
Uncertainty in the analysis of the overall equipment effectiveness on the shop floor
NASA Astrophysics Data System (ADS)
Rößler, M. P.; Abele, E.
2013-06-01
In this article an approach is presented that supports transparency regarding the effectiveness of manufacturing equipment by combining fuzzy set theory with the method of overall equipment effectiveness (OEE) analysis. One of the key principles of lean production, and a fundamental task in production optimization projects, is the prior analysis of the current state of a production system using key performance indicators in order to derive possible future states. The current state of the art in overall equipment effectiveness analysis is usually to cumulate different machine states by means of decentralized data collection, without consideration of uncertainty. In manual data collection or semi-automated plant data collection systems, the quality of the derived data often diverges and leads optimization teams to distorted conclusions about the real optimization potential of manufacturing equipment. The method discussed in this paper helps practitioners obtain more reliable results in the analysis phase, and thus better outcomes of optimization projects. The results obtained are discussed in the context of a case study.
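One simple way to carry recording uncertainty through the OEE calculation is interval arithmetic, which corresponds to a single α-cut of the fuzzy numbers used in the paper. The figures below are invented for illustration:

```python
def interval_mul(a, b):
    """Multiply two intervals (lo, hi); all values assumed non-negative."""
    return (a[0] * b[0], a[1] * b[1])

# Machine-state data from manual collection, given as intervals rather
# than crisp values to reflect recording uncertainty (invented numbers).
availability = (0.80, 0.88)  # uptime / planned production time
performance = (0.85, 0.95)   # actual / ideal cycle rate
quality = (0.97, 0.99)       # good parts / total parts

# OEE = availability * performance * quality, propagated as intervals
oee = interval_mul(interval_mul(availability, performance), quality)
print(f"OEE lies in [{oee[0]:.3f}, {oee[1]:.3f}]")  # prints: OEE lies in [0.660, 0.828]
```

The width of the resulting interval makes explicit how much of the apparent optimization potential could be an artifact of data quality rather than real equipment losses.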
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
NASA Astrophysics Data System (ADS)
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
Uncertainty estimates in broadband seismometer sensitivities using microseisms
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Storm, T.; Gee, L. S.; Hutt, C. R.; Wilson, D.
2015-04-01
The midband sensitivity of a seismic instrument is one of the fundamental parameters used in published station metadata. Any errors in this value can compromise amplitude estimates in otherwise high-quality data. To estimate an upper bound on the uncertainty in the midband sensitivity for modern broadband instruments, we compare daily microseism (4- to 8-s period) amplitude ratios between the vertical components of colocated broadband sensors across the IRIS/USGS (network code IU) seismic network. We find that the mean of the 145,972 daily ratios used between 2002 and 2013 is 0.9895 with a standard deviation of 0.0231. This suggests that the ratio between instruments shows a small bias and considerable scatter. We also find that these ratios follow a standard normal distribution (R² = 0.95442), which suggests that the midband sensitivity of an instrument has an error of no greater than ±6% with a 99% confidence interval. This gives an upper bound on the precision to which we know the sensitivity of a fielded instrument.
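The step from the ratio statistics to the ±6% bound can be reproduced on synthetic data as a sketch; the ratios below are drawn from a normal distribution with the published mean and standard deviation, not from real IU waveforms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for daily microseism amplitude ratios between colocated sensors
# (synthetic; the study used 145,972 real ratios: mean 0.9895, std 0.0231)
ratios = rng.normal(0.9895, 0.0231, size=145_972)

mean, std = ratios.mean(), ratios.std(ddof=1)

# If the ratios are normally distributed, 99% of them fall within
# +/- 2.576 standard deviations, giving the upper bound on
# midband-sensitivity error quoted in the abstract (about +/- 6%).
half_width = 2.576 * std
print(f"mean={mean:.4f}, std={std:.4f}, "
      f"99% of ratios within +/-{100 * half_width:.1f}% of the mean")
```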
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
Sensitivity of global model prediction to initial state uncertainty
NASA Astrophysics Data System (ADS)
Miguez-Macho, Gonzalo
The sensitivity of global and North American forecasts to uncertainties in the initial conditions is studied. The Utah Global Model is initialized with reanalysis data sets obtained from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium- Range Weather Forecasts (ECMWF). The differences between these analyses provide an estimate of initial uncertainty. The influence of certain scales of the initial uncertainty is tested in experiments with initial data change from NCEP to ECMWF reanalysis in a selected spectral band. Experiments are also done to determine the benefits of targeting local regions for forecast errors over North America. In these tests, NCEP initial data are replaced by ECMWF data in the considered region. The accuracy of predictions with initial data from either reanalysis only differs over the mid-latitudes of the Southern Hemisphere, where ECMWF initialized forecasts have somewhat greater skill. Results from the spectral experiments indicate that most of this benefit is explained by initial differences of the longwave components (wavenumbers 0-15). Approximately 67% of the 120-h global forecast difference produced by changing initial data from ECMWF to NCEP reanalyses is due to initial changes only in wavenumbers 0-15, and more than 85% of this difference is produced by initial changes in wavenumbers 0-20. The results suggest that large-scale errors of the initial state may play a more prominent role than suggested in some singular vector analyses, and favor global observational coverage to resolve the long waves. Results from the regional targeting experiments indicate that for forecast errors over North America, a systematic benefit comes only when the ``targeted'' region includes most of the north Pacific, pointing again at large scale errors as being prominent, even for midrange predictions over a local area.
Given the ubiquitous nature of phthalates in the environment and the potential for adverse human health impacts, there is a need to understand the potential human exposure. A three-compartment model is developed to estimate the emission rate of di-2-ethylhexyl phthalate (DEHP) f...
Neil, Louise; Olsson, Nora Choque; Pellicano, Elizabeth
2016-06-01
Guided by a recent theory that proposes fundamental differences in how autistic individuals deal with uncertainty, we investigated the extent to which the cognitive construct 'intolerance of uncertainty' and anxiety were related to parental reports of sensory sensitivities in 64 autistic and 85 typically developing children aged 6-14 years. Intolerance of uncertainty and anxiety explained approximately half the variance in autistic children's sensory sensitivities, but only around a fifth of the variance in typical children's sensory sensitivities. In children with autism only, intolerance of uncertainty remained a significant predictor of children's sensory sensitivities once the effects of anxiety were adjusted for. Our results suggest intolerance of uncertainty is a relevant construct to sensory sensitivities in children with and without autism. PMID:26864157
Sensitivity of collective action to uncertainty about climate tipping points
NASA Astrophysics Data System (ADS)
Barrett, Scott; Dannenberg, Astrid
2014-01-01
Despite more than two decades of diplomatic effort, concentrations of greenhouse gases continue to trend upwards, creating the risk that we may someday cross a threshold for `dangerous' climate change. Although climate thresholds are very uncertain, new research is trying to devise `early warning signals' of an approaching tipping point. This research offers a tantalizing promise: whereas collective action fails when threshold uncertainty is large, reductions in this uncertainty may bring about the behavioural change needed to avert a climate `catastrophe'. Here we present the results of an experiment, rooted in a game-theoretic model, showing that behaviour differs markedly on either side of a dividing line for threshold uncertainty. On one side of the dividing line, where threshold uncertainty is relatively large, free riding proves irresistible and trust elusive, making it virtually inevitable that the tipping point will be crossed. On the other side, where threshold uncertainty is small, the incentive to coordinate is strong and trust more robust, often leading the players to avoid crossing the tipping point. Our results show that uncertainty must be reduced to this `good' side of the dividing line to stimulate the behavioural shift needed to avoid `dangerous' climate change.
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-01-01
Water Footprint Assessment is a quickly growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of water footprint estimates to changes in important input variables and quantifies the size of uncertainty in water footprint estimates. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat in the Yellow River Basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River Basin in the period considered. The sensitivity and uncertainty analysis focused on the effects on water footprint estimates at basin level (in m³ t⁻¹) of four key input variables: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), and crop calendar. The one-at-a-time method was carried out to analyse the sensitivity of the water footprint of crops to fractional changes of individual input variables. Uncertainties in crop water footprint estimates were quantified through Monte Carlo simulations. The results show that the water footprint of crops is most sensitive to ET0 and Kc, followed by crop calendar and PR. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at the 95% confidence level). The effect of uncertainties in ET0 was dominant compared to that of precipitation. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±26% (at the 95% confidence level). The sensitivities and uncertainties differ across crop types, with highest sensitivities
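The one-at-a-time procedure can be sketched as follows. The water-footprint function is a deliberately crude stand-in for the grid-based daily water balance model, with invented base values:

```python
import numpy as np

def crop_water_footprint(pr, et0, kc):
    """Toy stand-in for the water balance model (not the study's model):
    crop water use is kc * et0; the part not met by rain is irrigated.
    Returns an illustrative aggregate of green + blue water use (mm)."""
    cwu = kc * et0                     # crop water use
    blue = np.maximum(cwu - pr, 0.0)   # irrigation (blue) share
    return cwu + blue

# Invented base values for one crop and season
base = dict(pr=450.0, et0=900.0, kc=1.05)

# One-at-a-time: perturb each input by +10%, hold the others at their
# base values, and record the fractional change in the output.
y0 = crop_water_footprint(**base)
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10
    y1 = crop_water_footprint(**perturbed)
    print(f"+10% {name}: output changes {100 * (y1 - y0) / y0:+.1f}%")
```

Even in this toy version the output responds more strongly to et0 and kc than to pr, the same qualitative ranking the study reports.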
Sensitivity and Uncertainty Analysis to Burn-up Estimates on ADS Using ACAB Code
Cabellos, O; Sanz, J; Rodriguez, A; Gonzalez, E; Embid, M; Alvarez, F; Reyes, S
2005-02-11
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic reevaluation of some uncertainty XSs for ADS.
Uncertainty and sensitivity assessments of GPS and GIS integrated applications for transportation.
Hong, Sungchul; Vonderohe, Alan P
2014-01-01
Uncertainty and sensitivity analysis methods are introduced, concerning the quality of spatial data as well as that of output information from Global Positioning System (GPS) and Geographic Information System (GIS) integrated applications for transportation. In the methods, an error model and an error propagation method form a basis for formulating the characterization and propagation of uncertainties. They are developed in two distinct approaches: analytical and simulation. Thus, an initial evaluation is performed to compare and examine uncertainty estimations from the analytical and simulation approaches. The evaluation results show that estimated ranges of output information from the analytical and simulation approaches are compatible, but the simulation approach rather than the analytical approach is preferred for uncertainty and sensitivity analyses, due to its flexibility and capability to realize positional errors in both input data. Therefore, in a case study, uncertainty and sensitivity analyses based upon the simulation approach are conducted on a winter maintenance application. The sensitivity analysis is used to determine optimum input data qualities, and the uncertainty analysis is then applied to estimate overall qualities of output information from the application. The analysis results show that output information from the non-distance-based computation model is not sensitive to positional uncertainties in input data. However, for the distance-based computational model, output information has a different magnitude of uncertainties, depending on position uncertainties in input data. PMID:24518894
UNCERTAINTY AND SENSITIVITY ANALYSES FOR VERY HIGH ORDER MODELS
While there may in many cases be high potential for exposure of humans and ecosystems to chemicals released from a source, the degree to which this potential is realized is often uncertain. Conceptually, uncertainties are divided among parameters, model, and modeler during simula...
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis in the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem, which deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all at once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and the methodology is then applied to the NASA Langley Uncertainty Quantification Challenge problem.
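The sequential refinement loop can be sketched on a toy problem. The four-variable response function, the interval representation of epistemic uncertainty, and the crude variance-based importance measure below are all simplifying assumptions, not the NASA-LUQC setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def system_response(e):
    """Toy system-level model of four epistemic variables (not the GTM)."""
    return 3.0 * e[0] + 0.5 * e[1] ** 2 + 0.1 * e[2] + 2.0 * e[0] * e[3]

# Epistemic variables represented by intervals; "refinement" shrinks an
# interval around its midpoint, mimicking the effect of acquiring data.
intervals = [np.array([0.0, 1.0]) for _ in range(4)]

def importance(i, n=5_000):
    """Crude importance measure: output variance when only variable i
    varies over its interval and the others sit at their midpoints."""
    mids = np.array([iv.mean() for iv in intervals])
    samples = rng.uniform(*intervals[i], size=n)
    outs = [system_response(np.where(np.arange(4) == i, s, mids)) for s in samples]
    return np.var(outs)

order = []
for _ in range(4):
    # Sequential refinement: pick the currently most important unrefined
    # variable, refine it, then re-rank before choosing the next one.
    scores = {i: importance(i) for i in range(4) if i not in order}
    best = max(scores, key=scores.get)
    order.append(best)
    mid = intervals[best].mean()
    intervals[best] = np.array([mid - 0.05, mid + 0.05])

print("refinement order:", order)
```

Because the ranking is recomputed after each refinement, interactions can reorder the remaining variables, which is exactly the advantage over picking all variables at once.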
NASA Astrophysics Data System (ADS)
Ťupek, Boris; Launiainen, Samuli; Peltoniemi, Mikko; Heikkinen, Jukka; Lehtonen, Aleksi
2016-04-01
In most process-based soil carbon models, litter decomposition rates depend on environmental conditions, are linked with soil heterotrophic CO2 emissions, and serve for estimating soil carbon sequestration. By the mass balance equation, the variation in measured litter inputs and measured heterotrophic soil CO2 effluxes should therefore indicate the soil carbon stock changes needed by soil carbon management for mitigation of anthropogenic CO2 emissions, provided that the sensitivity functions of the applied model suit the environmental conditions, e.g., soil temperature and moisture. We evaluated the response forms of autotrophic and heterotrophic forest floor respiration to soil temperature and moisture in four boreal forest sites of the International Cooperative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) by a soil trenching experiment during the year 2015 in southern Finland. As expected, both autotrophic and heterotrophic forest floor respiration components were primarily controlled by soil temperature, and exponential regression models generally explained more than 90% of the variance. Soil moisture regression models on average explained less than 10% of the variance, and the response forms varied between Gaussian for the autotrophic forest floor respiration component and linear for the heterotrophic forest floor respiration component. Although soil moisture explained only a small percentage of the variance in heterotrophic respiration, the observed reduction of CO2 emissions at higher moisture levels suggests that the moisture response of soil carbon models that do not account for reduction under excessive moisture should be re-evaluated in order to estimate the right levels of soil carbon stock changes. Our further study will include evaluation of process-based soil carbon models against the annual heterotrophic respiration and soil carbon stocks.
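Fitting the exponential temperature response mentioned above is a log-linear regression. The data below are synthetic, generated from an assumed response, not the ICP Forests measurements:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic forest-floor respiration data (illustrative only): an
# exponential soil-temperature response with multiplicative noise.
t_soil = rng.uniform(2.0, 18.0, size=200)                        # deg C
resp = 0.8 * np.exp(0.1 * t_soil) * rng.lognormal(0.0, 0.08, size=200)

# Fit R = R0 * exp(b * T) by linear regression on log(R).
b, log_r0 = np.polyfit(t_soil, np.log(resp), 1)
pred = np.polyval([b, log_r0], t_soil)

# Coefficient of determination, computed in log space
r2 = 1.0 - np.var(np.log(resp) - pred) / np.var(np.log(resp))

q10 = np.exp(10.0 * b)  # conventional temperature-sensitivity summary
print(f"R0={np.exp(log_r0):.2f}, b={b:.3f}, Q10={q10:.2f}, R^2={r2:.2f}")
```

With tight multiplicative noise, as here, the fit recovers the assumed parameters and an R² above 0.9, consistent with the variance fractions the abstract reports for temperature.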
TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE
Rearden, Bradley T; Mueller, Don; Bowman, Stephen M; Busch, Robert D.; Emerson, Scott
2009-01-01
This primer presents examples of applying the SCALE/TSUNAMI tools to generate k{sub eff} sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D, and to examine uncertainties in the computed k{sub eff} values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and the need to confirm the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system, and of the TSUNAMI tools for assessing system similarity using sensitivity and uncertainty criteria, are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...
Users manual for the FORSS sensitivity and uncertainty analysis code system
Lucius, J.L.; Weisbin, C.R.; Marable, J.H.; Drischler, J.D.; Wright, R.Q.; White, J.E.
1981-01-01
FORSS is a code system used to study relationships between nuclear reaction cross sections, integral experiments, reactor performance parameter predictions and associated uncertainties. This report describes the computing environment and the modules currently used to implement FORSS Sensitivity and Uncertainty Methodology.
Sensitivity and uncertainty analysis of reactivities for UO2 and MOX fueled PWR cells
Foad, Basma; Takeda, Toshikazu
2015-12-31
The purpose of this paper is to apply our improved method for calculating sensitivities and uncertainties of reactivity responses to UO{sub 2} and MOX fueled pressurized water reactor cells. The improved method has been used to calculate sensitivity coefficients relative to infinite-dilution cross sections, where the self-shielding effect is taken into account. Two types of reactivities are considered: Doppler reactivity and coolant void reactivity. For each type, the sensitivities are calculated for small and large perturbations. The results demonstrate that the reactivity responses have larger relative uncertainty than eigenvalue responses. In addition, the uncertainty of the coolant void reactivity is much greater than that of the Doppler reactivity, especially for large perturbations. The sensitivity coefficients and uncertainties of both reactivities were verified by comparison with SCALE code results using the ENDF/B-VII library, and good agreement was found.
PROBABILISTIC SENSITIVITY AND UNCERTAINTY ANALYSIS WORKSHOP SUMMARY REPORT
Seitz, R
2008-06-25
Stochastic or probabilistic modeling approaches are being applied more frequently in the United States and globally to quantify uncertainty and enhance understanding of model response in performance assessments for disposal of radioactive waste. This increased use has resulted in global interest in sharing results of research and applied studies that have been completed to date. This technical report reflects the results of a workshop that was held to share results of research and applied work related to performance assessments conducted at United States Department of Energy sites. Key findings of this research and applied work are discussed and recommendations for future activities are provided.
Calculating Sensitivities, Response and Uncertainties Within LODI for Precipitation Scavenging
Loosmore, G; Hsieh, H; Grant, K
2004-01-21
This paper describes an investigation into the uses of first-order, local sensitivity analysis in a Lagrangian dispersion code. The goal of the project is to gain knowledge not only about the sensitivity of the dispersion code predictions to the specific input parameters of interest, but also to better understand the uses and limitations of sensitivity analysis within such a context. The dispersion code of interest here is LODI, which is used for modeling emergency release scenarios at the Department of Energy's National Atmospheric Release Advisory Center (NARAC) at Lawrence Livermore National Laboratory. The NARAC system provides both real-time operational predictions and detailed assessments for atmospheric releases of hazardous materials. LODI is driven by a meteorological data assimilation model and an in-house version of COAMPS, the Naval Research Laboratory's mesoscale weather forecast model.
Uncertainty and Sensitivity Analysis in Performance Assessment for the Waste Isolation Pilot Plant
Helton, J.C.
1998-12-17
The Waste Isolation Pilot Plant (WIPP) is under development by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. This development has been supported by a sequence of performance assessments (PAs) carried out by Sandia National Laboratories (SNL) to assess what is known about the WIPP and to provide guidance for future DOE research and development activities. Uncertainty and sensitivity analysis procedures based on Latin hypercube sampling and regression techniques play an important role in these PAs by providing an assessment of the uncertainty in important analysis outcomes and identifying the sources of this uncertainty. Performance assessments for the WIPP are conceptually and computationally interesting due to regulatory requirements to assess and display the effects of both stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, where stochastic uncertainty arises from the possible disruptions that could occur over the 10,000 yr regulatory period associated with the WIPP and subjective uncertainty arises from an inability to unambiguously characterize the many models and associated parameters required in a PA for the WIPP. The interplay among uncertainty analysis, sensitivity analysis, stochastic uncertainty and subjective uncertainty is discussed and illustrated in the context of a recent PA carried out by SNL to support an application by the DOE to the U.S. Environmental Protection Agency for the certification of the WIPP for the disposal of TRU waste.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
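A minimal sketch of the first-order moment method described above, under the same assumption of independent, normally distributed inputs; the finite-difference step and the linear test model are our own, not the paper's:

```python
def first_order_moments(f, mu, sigma, h=1e-6):
    """First-order second-moment propagation:
    mean ~ f(mu); var ~ sum_i (df/dx_i)^2 * sigma_i^2,
    using central-difference sensitivity derivatives.
    Assumes independent, normally distributed inputs."""
    mean = f(mu)
    var = 0.0
    for i in range(len(mu)):
        up = list(mu); up[i] += h
        dn = list(mu); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2 * h)
        var += (dfdx * sigma[i]) ** 2
    return mean, var

# Linear test model f = 3*x0 + 4*x1, so var = 9*s0^2 + 16*s1^2 exactly
f = lambda x: 3 * x[0] + 4 * x[1]
mean, var = first_order_moments(f, [1.0, 2.0], [0.1, 0.2])
```

For a linear model the first-order approximation is exact, which is why the toy case is a useful check before applying it to a nonlinear CFD response.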
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks as follows: (1) since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems); and (2) since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
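Of the sampling-based methods listed above, Latin hypercube sampling is the easiest to sketch; this minimal version on the unit hypercube is ours, not from the review:

```python
import random

def latin_hypercube(n, d, rng=None):
    """n samples in d dimensions on [0,1): each dimension is split into
    n equal strata, each stratum is sampled exactly once, and the strata
    are independently permuted per dimension."""
    rng = rng or random.Random(0)
    samples = [[0.0] * d for _ in range(n)]
    for j in range(d):
        strata = list(range(n))
        rng.shuffle(strata)
        for i in range(n):
            samples[i][j] = (strata[i] + rng.random()) / n
    return samples

pts = latin_hypercube(10, 3)
```

The one-sample-per-stratum property is what distinguishes this from plain random sampling: every marginal is guaranteed to be covered evenly even for small n.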
Modelling survival: exposure pattern, species sensitivity and uncertainty.
Ashauer, Roman; Albert, Carlo; Augustine, Starrlight; Cedergreen, Nina; Charles, Sandrine; Ducrot, Virginie; Focks, Andreas; Gabsi, Faten; Gergs, André; Goussen, Benoit; Jager, Tjalling; Kramer, Nynke I; Nyman, Anna-Maija; Poulsen, Veronique; Reichenberger, Stefan; Schäfer, Ralf B; Van den Brink, Paul J; Veltman, Karin; Vogel, Sören; Zimmer, Elke I; Preuss, Thomas G
2016-01-01
The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how well GUTS, calibrated with short-term survival data of Gammarus pulex exposed to four pesticides, can forecast effects of longer-term pulsed exposures. Thirdly, we tested the ability of GUTS to estimate 14-day median effect concentrations of malathion for a range of species and use these estimates to build species sensitivity distributions for different exposure patterns. We find that GUTS adequately predicts survival across exposure patterns that vary over time. When toxicity is assessed for time-variable concentrations species may differ in their responses depending on the exposure profile. This can result in different species sensitivity rankings and safe levels. The interplay of exposure pattern and species sensitivity deserves systematic investigation in order to better understand how organisms respond to stress, including humans. PMID:27381500
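As a hedged illustration of the GUTS idea described above (time-variable exposure drives scaled damage, which drives a hazard rate), here is a minimal stochastic-death-style sketch; the equations are simplified and all parameter names and values are ours, not the paper's:

```python
import math

def guts_sd_survival(conc, dt, kd, kk, z):
    """Minimal GUTS stochastic-death-style sketch: scaled damage D
    follows dD/dt = kd * (C(t) - D); hazard h = kk * max(D - z, 0);
    survival S(t) = exp(-cumulative hazard). Euler integration,
    illustrative only."""
    D, H, surv = 0.0, 0.0, [1.0]
    for c in conc:
        D += dt * kd * (c - D)          # toxicokinetics toward exposure
        H += dt * kk * max(D - z, 0.0)  # hazard above threshold z only
        surv.append(math.exp(-H))
    return surv

# Constant exposure above the threshold kills; no exposure does not
pulse = guts_sd_survival([2.0] * 100, dt=0.1, kd=1.0, kk=0.5, z=0.5)
clean = guts_sd_survival([0.0] * 100, dt=0.1, kd=1.0, kk=0.5, z=0.5)
```

Because the damage state carries memory, the same model evaluated on pulsed versus constant concentration profiles yields different survival curves, which is the property the studies above exploit.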
Energy Science and Technology Software Center (ESTSC)
1991-03-12
Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).
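The variance propagation SUSD performs can be sketched with the standard "sandwich rule", var(R) = sᵀ V s, for a sensitivity vector s and a cross-section covariance matrix V; the numbers below are invented for illustration:

```python
def response_variance(s, cov):
    """Sandwich rule: var(R) = s^T V s for a sensitivity vector s
    (relative or absolute, matching the units of V) and a symmetric
    covariance matrix V."""
    n = len(s)
    return sum(s[i] * cov[i][j] * s[j]
               for i in range(n) for j in range(n))

# Two cross sections with 3% and 4% standard deviations, correlation 0.5
s = [1.0, -0.5]
cov = [[0.0009, 0.0006],
       [0.0006, 0.0016]]
var = response_variance(s, cov)
std = var ** 0.5
```

Note the off-diagonal covariance and the opposite-sign sensitivities partially cancel, so the response uncertainty is smaller than the uncorrelated sum would suggest.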
Robinson, Mike J F; Anselme, Patrick; Suchomel, Kristen; Berridge, Kent C
2015-08-01
Amphetamine and stress can sensitize mesolimbic dopamine-related systems. In Pavlovian autoshaping, repeated exposure to uncertainty of reward prediction can enhance motivated sign-tracking or attraction to a discrete reward-predicting cue (lever-conditioned stimulus; CS+), as well as produce cross-sensitization to amphetamine. However, it remains unknown how amphetamine sensitization or repeated restraint stress interact with uncertainty in controlling CS+ incentive salience attribution reflected in sign-tracking. Here, rats were tested in 3 successive phases. First, different groups underwent either induction of amphetamine sensitization or repeated restraint stress, or else were not sensitized or stressed as control groups (either saline injections only, or no stress or injection at all). All next received Pavlovian autoshaping training under either certainty conditions (100% CS-UCS association) or uncertainty conditions (50% CS-UCS association and uncertain reward magnitude). During training, rats were assessed for sign-tracking to the CS+ lever versus goal-tracking to the sucrose dish. Finally, all groups were tested for psychomotor sensitization of locomotion revealed by an amphetamine challenge. Our results confirm that reward uncertainty enhanced sign-tracking attraction toward the predictive CS+ lever, at the expense of goal-tracking. We also found that amphetamine sensitization promoted sign-tracking even in rats trained under CS-UCS certainty conditions, raising them to sign-tracking levels equivalent to the uncertainty group. Combining amphetamine sensitization and uncertainty conditions did not further elevate sign-tracking above the relatively high levels induced by either manipulation alone. In contrast, repeated restraint stress enhanced subsequent amphetamine-elicited locomotion, but did not enhance CS+ attraction. PMID:26076340
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input
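A minimal sketch of the one-at-a-time method used above: perturb each input by a fixed fraction and record the relative change in output. The toy model (water footprint as evapotranspiration over yield) and all names are ours, not the study's:

```python
def oat_sensitivity(model, base, frac=0.1):
    """One-at-a-time sensitivity: perturb each input by +/-frac of its
    baseline and report the relative change in the model output."""
    y0 = model(base)
    out = {}
    for name, v in base.items():
        for sign in (+1, -1):
            x = dict(base)
            x[name] = v * (1 + sign * frac)
            out[(name, sign)] = (model(x) - y0) / y0
    return out

# Toy water-footprint-like model: WF = ET / yield (illustrative only)
model = lambda p: p["et"] / p["yield"]
sens = oat_sensitivity(model, {"et": 500.0, "yield": 5.0})
```

For this ratio model the output responds linearly to the numerator and nonlinearly to the denominator, which is exactly the kind of asymmetry OAT scans reveal.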
Uncertainty and Sensitivity of Alternative Rn-222 Flux Density Models Used in Performance Assessment
Shott, Greg J.; Yucel, Vefa; Desotell, Lloyd; Pyles, G.; Carilli, Jon
2007-06-01
Performance assessments for the Area 5 Radioactive Waste Management Site on the Nevada Test Site have used three different mathematical models to estimate Rn-222 flux density. This study describes the performance, uncertainty, and sensitivity of the three models, which include the U.S. Nuclear Regulatory Commission Regulatory Guide 3.64 analytical method and two numerical methods. The uncertainty of each model was determined by Monte Carlo simulation using Latin hypercube sampling. The global sensitivity was investigated using the Morris one-at-a-time screening method, sample-based correlation and regression methods, the variance-based extended Fourier amplitude sensitivity test, and Sobol's sensitivity indices. The models were found to produce similar estimates of the mean and median flux density, but to have different uncertainties and sensitivities. When the Rn-222 effective diffusion coefficient was estimated using five different published predictive models, the radon flux density models were found to be most sensitive to the effective diffusion coefficient model selected, the emanation coefficient, and the radionuclide inventory. Using a site-specific measured effective diffusion coefficient significantly reduced the output uncertainty. When a site-specific effective diffusion coefficient was used, the models were most sensitive to the emanation coefficient and the radionuclide inventory.
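Among the global methods listed above, first-order Sobol indices can be estimated with a Saltelli-style pick-and-freeze Monte Carlo scheme. This sketch uses an additive toy model with known analytic indices; everything in it is illustrative, not from the study:

```python
import random

def sobol_first_order(model, d, n=20000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices:
    S_i = V_i / V, with V_i estimated from paired sample matrices A
    and B, where the i-th column of B is swapped into A."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    f0 = sum(yA) / n
    V = sum(y * y for y in yA) / n - f0 * f0   # total output variance
    S = []
    for i in range(d):
        ABi = [A[k][:i] + [B[k][i]] + A[k][i + 1:] for k in range(n)]
        yABi = [model(x) for x in ABi]
        Vi = sum(yB[k] * (yABi[k] - yA[k]) for k in range(n)) / n
        S.append(Vi / V)
    return S

# Additive model y = 4*x0 + x1 on U(0,1): analytic S0 = 16/17, S1 = 1/17
S = sobol_first_order(lambda x: 4 * x[0] + x[1], d=2)
```

Because the toy model is additive, the first-order indices sum to one; for interacting models the gap to one is carried by higher-order terms.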
Visualization tools for uncertainty and sensitivity analyses on thermal-hydraulic transients
NASA Astrophysics Data System (ADS)
Popelin, Anne-Laure; Iooss, Bertrand
2014-06-01
In nuclear engineering studies, uncertainty and sensitivity analyses of simulation codes can be complicated by the complexity of the input and/or output variables. If these variables represent a transient or spatial phenomenon, the difficulty is to provide tools adapted to their functional nature. In this paper, we describe useful visualization tools for the uncertainty analysis of model transient outputs. Our application involves thermal-hydraulic computations for safety studies of nuclear pressurized water reactors.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
Arbanas, Goran; Williams, Mark L; Leal, Luiz C; Dunn, Michael E; Khuwaileh, Bassam A.; Wang, C; Abdel-Khalik, Hany
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX system [1]. The IS/UQ method aims to quantify and prioritize the cross section measurements, along with their uncertainties, needed to yield a given nuclear application's target response uncertainty, and to do so at minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way, and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously.
Advancing Inverse Sensitivity/Uncertainty Methods for Nuclear Fuel Cycle Applications
NASA Astrophysics Data System (ADS)
Arbanas, G.; Williams, M. L.; Leal, L. C.; Dunn, M. E.; Khuwaileh, B. A.; Wang, C.; Abdel-Khalik, H.
2015-01-01
The inverse sensitivity/uncertainty quantification (IS/UQ) method has recently been implemented in the Inverse Sensitivity/UnceRtainty Estimator (INSURE) module of the AMPX cross section processing system [M.E. Dunn and N.M. Greene, "AMPX-2000: A Cross-Section Processing System for Generating Nuclear Data for Criticality Safety Applications," Trans. Am. Nucl. Soc. 86, 118-119 (2002)]. The IS/UQ method aims to quantify and prioritize the cross section measurements along with uncertainties needed to yield a given nuclear application(s) target response uncertainty, and doing this at a minimum cost. Since in some cases the extant uncertainties of the differential cross section data are already near the limits of the present-day state-of-the-art measurements, requiring significantly smaller uncertainties may be unrealistic. Therefore, we have incorporated integral benchmark experiments (IBEs) data into the IS/UQ method using the generalized linear least-squares method, and have implemented it in the INSURE module. We show how the IS/UQ method could be applied to systematic and statistical uncertainties in a self-consistent way and how it could be used to optimize uncertainties of IBEs and differential cross section data simultaneously. We itemize contributions to the cost of differential data measurements needed to define a realistic cost function.
Sensitivity and uncertainty in the effective delayed neutron fraction ({beta}{sub eff})
Kodeli, I. I.
2012-07-01
Precise knowledge of the effective delayed neutron fraction ({beta}{sub eff}) and of the corresponding uncertainty is important for reactor safety analysis. The interest in developing the methodology for estimating the uncertainty in {beta}{sub eff} was expressed in the scope of the UAM project of the OECD/NEA. A novel approach for the calculation of the nuclear data sensitivity and uncertainty of the effective delayed neutron fraction is proposed, based on linear perturbation theory. The method allows the detailed analysis of the components of the {beta}{sub eff} uncertainty. The procedure was implemented in the SUSD3D sensitivity and uncertainty code and applied to several fast neutron benchmark experiments from the ICSBEP and IRPhE databases. According to the JENDL-4 covariance matrices, and taking into account the uncertainty in the cross sections and in the prompt and delayed fission spectra, the total uncertainty in {beta}{sub eff} was found to be of the order of {approx}2 to {approx}3.5% for the studied fast experiments. (authors)
NASA Astrophysics Data System (ADS)
van den Brink, Cors; Zaadnoordijk, Willem Jan; Burgers, Saskia; Griffioen, Jasper
2008-11-01
Groundwater quality management has come to rely increasingly on models in recent years. These models are used to predict the risk of groundwater contamination for various land uses. This paper presents an assessment of uncertainties and sensitivities to input parameters for a regional model. The model had been set up to improve and facilitate the decision-making process between stakeholders in a groundwater quality conflict. The stochastic uncertainty and sensitivity analysis comprised a Monte Carlo simulation technique in combination with a Latin hypercube sampling procedure. The uncertainty of the calculated concentrations of nitrate leached into groundwater was assessed for the various combinations of land use, soil type, and depth of the groundwater table in a vulnerable, sandy region in The Netherlands. The uncertainties in the shallow groundwater were used to assess the uncertainty of the nitrate concentration in the abstracted groundwater. The confidence intervals of the calculated nitrate concentrations in shallow groundwater for agricultural land use functions did not overlap with those of non-agricultural land use such as nature, indicating significantly different nitrate leaching in these areas. The model results were sensitive to almost all input parameters analyzed. However, the NSS is considered fairly robust, because no shifts in uncertainty between factors occurred under the systematic changes in fertilizer and manure inputs of the scenarios. In view of these results, there is no need to collect more data to allow science-based decision-making in this planning process.
Simpson, J.C.; Ramsdell, J.V. Jr.
1993-04-01
Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analysis plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models, where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) generation of samples from uncertain analysis inputs, (3) propagation of sampled inputs through an analysis, (4) presentation of uncertainty analysis results, and (5) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top-down coefficient of concordance, and variance decomposition.
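One of the rank-transformation measures surveyed above, the Spearman rank correlation, is simply the Pearson correlation of ranks; this tie-free sketch is illustrative, not code from the survey:

```python
def ranks(xs):
    """Simple 0-based ranks; assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    A common sampling-based sensitivity measure for monotone
    input-output relations."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Monotone but nonlinear relation: rank correlation is exactly 1
x = [0.1, 0.4, 0.2, 0.9, 0.7]
y = [v ** 3 for v in x]
rho = spearman(x, y)
```

The rank transform is what lets these methods capture strong but nonlinear monotone dependences that ordinary linear correlation understates.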
A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology
Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.
2008-01-01
Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data that are used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system as, by default, they hold all other parameters fixed at baseline values. Using the techniques described within, we demonstrate how a multi-dimensional parameter space can be studied globally so that all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, both in deterministic and stochastic settings, and propose novel techniques to handle problems encountered during this type of analysis. PMID:18572196
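The global approach described here is commonly implemented as Latin hypercube sampling (LHS) combined with partial rank correlation coefficients (PRCC). The sketch below assumes that pairing; the three-parameter linear-plus-dummy model is invented for illustration:

```python
import numpy as np

def latin_hypercube(n, k, rng):
    """Stratified sample: each column takes one value from each of n equal strata."""
    samples = np.empty((n, k))
    for j in range(k):
        samples[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return samples

def prcc(X, y):
    """Partial rank correlation coefficient of each input with the output:
    correlate the rank-transformed pair after regressing out the other inputs."""
    n, k = X.shape
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    out = np.empty(k)
    for i in range(k):
        Z = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
        rx_res = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        ry_res = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[i] = float(rx_res @ ry_res / np.sqrt((rx_res @ rx_res) * (ry_res @ ry_res)))
    return out

rng = np.random.default_rng(0)
X = latin_hypercube(500, 3, rng)                                 # three uncertain parameters
y = 5.0 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * rng.normal(size=500)   # x2 is a dummy
s = prcc(X, y)
```

The dummy parameter's PRCC stays near zero, while the two active parameters are flagged with the correct signs; this is the behavior that makes PRCC useful for screening large parameter spaces.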
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse-collocation non-intrusive polynomial chaos approach, along with global nonlinear sensitivity analysis, was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total-order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, arising from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
Sensitivity and uncertainty in the effective delayed neutron fraction (βeff)
NASA Astrophysics Data System (ADS)
Kodeli, Ivan-Alexander
2013-07-01
Precise knowledge of the effective delayed neutron fraction (βeff) and the corresponding uncertainty is important for nuclear reactor safety analysis. The interest in developing the methodology for estimating the uncertainty in βeff was expressed in the scope of the UAM project of the OECD/NEA. The sensitivity and uncertainty analysis of βeff performed using the standard first-order perturbation code SUSD3D is presented. The sensitivity coefficients of βeff with respect to the basic nuclear data were calculated by deriving Bretscher's k-ratio formula. The procedure was applied to several fast neutron benchmark experiments selected from the ICSBEP and IRPhE databases. According to the JENDL-4.0m covariance matrices and taking into account the uncertainties in the cross-sections and in the prompt and delayed fission spectra, the total uncertainty in βeff was found to be in general around 3%, and up to ~7% for the 233U benchmarks. An approximation was applied to investigate the uncertainty due to the delayed fission neutron spectra. The βeff sensitivity and uncertainty analyses are furthermore demonstrated to be useful for the better understanding and interpretation of the physical phenomena involved. Due to their specific sensitivity profiles the βeff measurements are shown to provide valuable complementary information which could be used in combination with the criticality (keff) measurements for the evaluation and validation of certain nuclear reaction data, such as for example the delayed (and prompt) fission neutron yields and interestingly also the 238U inelastic and elastic scattering cross-sections.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
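A variance-based Sobol' analysis of the kind described can be sketched with the standard pick-freeze (Saltelli-style) estimator. Here the Ishigami test function stands in for the sea ice model, and the sample size is illustrative; a real application would use Sobol' sequences and an emulator, as the abstract describes:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Standard Sobol' test function with known first-order indices."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(42)
n, k = 8192, 3
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, k))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S1 = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all inputs except the i-th
    S1.append(float(np.mean(fB * (ishigami(ABi) - fA)) / var))
# Analytical first-order indices for Ishigami: approximately (0.314, 0.442, 0.0)
```

The indices sum to less than one here because the Ishigami function has a strong x1-x3 interaction, exactly the kind of effect that one-at-a-time analyses miss.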
Sensitivity and Uncertainty Analysis in Chemical Mechanisms for Air Quality Modeling
NASA Astrophysics Data System (ADS)
Gao, Dongfen
1995-01-01
Ambient ozone in urban and regional air pollution is a serious environmental problem. Air quality models can be used to predict ozone concentrations and explore control strategies. One important component of such air quality models is a chemical mechanism. Sensitivity and uncertainty analysis play an important role in the evaluation of the performance of air quality models. The uncertainties associated with the RADM2 chemical mechanism in predicted concentrations of O3, HCHO, H2O2, PAN, and HNO3 were estimated. Monte Carlo simulations with Latin hypercube sampling were used to estimate the overall uncertainties in concentrations of species of interest due to uncertainties in chemical parameters. The parameters that were treated as random variables were identified through first-order sensitivity and uncertainty analyses. Recent estimates of uncertainties in rate parameters and product yields were used. The results showed the relative uncertainties in ozone predictions are ±23-50% (1σ relative to the mean) in urban cases, and less than ±20% in rural cases. Uncertainties in HNO3 concentrations are the smallest, followed by HCHO, O3 and PAN. Predicted H2O2 concentrations have the highest uncertainties. Uncertainties in the differences of peak ozone concentrations between base and control cases were also studied. The results show that the uncertainties in the fractional reductions in ozone concentrations were 9-12% with NOx control at an ROG/NOx ratio of 24:1 and 11-33% with ROG control at an ROG/NOx ratio of 6:1. Linear regression analysis of the Monte Carlo results showed that uncertainties in rate parameters for the formation of HNO3, for the reaction HCHO + hν → 2HO2 + CO, for PAN chemistry, and for the photolysis of NO2 are most influential to ozone concentrations and differences of ozone. The parameters that are important to ozone concentrations also tend to be relatively influential to other key species.
Flood damage maps: ranking sources of uncertainty with variance-based sensitivity analysis
NASA Astrophysics Data System (ADS)
Saint-Geours, N.; Grelot, F.; Bailly, J.-S.; Lavergne, C.
2012-04-01
In order to increase the reliability of flood damage assessment, we need to question the uncertainty associated with the whole flood risk modeling chain. Using a case study on the basin of the Orb River, France, we demonstrate how variance-based sensitivity analysis can be used to quantify uncertainty in flood damage maps at different spatial scales and to identify the sources of uncertainty which should be reduced first. Flood risk mapping is recognized as an effective tool in flood risk management and the elaboration of flood risk maps is now required for all major river basins in the European Union (European directive 2007/60/EC). Flood risk maps can be based on the computation of the Mean Annual Damages indicator (MAD). In this approach, potential damages due to different flood events are estimated for each individual stake over the study area, then averaged over time - using the return period of each flood event - and finally mapped. The issue of uncertainty associated with these flood damage maps should be carefully scrutinized, as they are used to inform the relevant stakeholders or to design flood mitigation measures. Maps of the MAD indicator are based on the combination of hydrological, hydraulic, geographic and economic modeling efforts: as a result, numerous sources of uncertainty arise in their elaboration. Many recent studies describe these various sources of uncertainty (Koivumäki 2010, Bales 2009). Some authors propagate these uncertainties through the flood risk modeling chain and estimate confidence bounds around the resulting flood damage estimates (de Moel 2010). It would now be of great interest to go a step further and to identify which sources of uncertainty account for most of the variability in Mean Annual Damages estimates. We demonstrate the use of variance-based sensitivity analysis to rank sources of uncertainty in flood damage mapping and to quantify their influence on the accuracy of flood damage estimates. We use a quasi
NASA Astrophysics Data System (ADS)
Lee, L. A.; Carslaw, K. S.; Pringle, K. J.
2012-04-01
Global aerosol contributions to radiative forcing (and hence climate change) are persistently subject to large uncertainty in successive Intergovernmental Panel on Climate Change (IPCC) reports (Schimel et al., 1996; Penner et al., 2001; Forster et al., 2007). As such, more complex global aerosol models are being developed to simulate aerosol microphysics in the atmosphere. The uncertainty in global aerosol model estimates is currently estimated by measuring the diversity amongst different models (Textor et al., 2006, 2007; Meehl et al., 2007). The process-level uncertainty due to the need to parameterise such models is not yet understood, and it is difficult to know whether the added model complexity comes at a cost of high model uncertainty. In this work the model uncertainty and its sources due to the uncertain parameters are quantified using variance-based sensitivity analysis. Given the complexity of a global aerosol model, we use Gaussian process emulation with a sufficient experimental design to make such a sensitivity analysis possible. The global aerosol model used here is GLOMAP (Mann et al., 2010) and we quantify the sensitivity of numerous model outputs to 27 expertly elicited uncertain model parameters describing emissions and processes such as growth and removal of aerosol. Using the R package DiceKriging (Roustant et al., 2010) along with the package sensitivity (Pujol, 2008) it has been possible to produce monthly global maps of model sensitivity to the uncertain parameters over the year 2008. Global model outputs estimated by the emulator are shown to be consistent with previously published estimates (Spracklen et al. 2010, Mann et al. 2010) but now we have an associated measure of parameter uncertainty and its sources. It can be seen that globally some parameters have no effect on the model predictions and any further effort in their development may be unnecessary, although a structural error in the model might also be identified. The
Energy Science and Technology Software Center (ESTSC)
1981-02-02
Version: 00 SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections (of standard multigroup cross-section sets) and for secondary energy distributions (SEDs) of multigroup scattering matrices.
Sensitivity Analysis and Uncertainty Propagation in a General-Purpose Thermal Analysis Code
Blackwell, Bennie F.; Dowding, Kevin J.
1999-08-04
Methods are discussed for computing the sensitivity of field variables to changes in material properties and initial/boundary condition parameters for heat transfer problems. The method we focus on is termed the ''Sensitivity Equation Method'' (SEM). It involves deriving field equations for sensitivity coefficients by differentiating the original field equations with respect to the parameters of interest and numerically solving the resulting sensitivity field equations. Uncertainties in the model parameters are then propagated through the computational model using results derived from first-order perturbation theory; this technique is identical to the methodology typically used to propagate experimental uncertainty. Numerical results are presented for the design of an experiment to estimate the thermal conductivity of stainless steel using transient temperature measurements made on prototypical hardware of a companion contact conductance experiment. Comments are made relative to extending the SEM to conjugate heat transfer problems.
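The Sensitivity Equation Method can be illustrated on a lumped-capacitance cooling problem (a stand-in for the field equations discussed; all parameter values are invented). Differentiating dT/dt = -k(T - Tinf) with respect to k yields an ODE for the sensitivity s = dT/dk that is integrated alongside the state:

```python
import numpy as np

# Lumped-capacitance cooling: dT/dt = -k (T - Tinf).
# Differentiating w.r.t. k gives the sensitivity equation for s = dT/dk:
#   ds/dt = -(T - Tinf) - k s,  s(0) = 0,
# integrated alongside the state (explicit Euler for brevity).
k, Tinf, T0 = 0.5, 300.0, 400.0          # invented parameter values
dt, nsteps = 1e-4, 20000                 # integrate to t = 2.0
T, s = T0, 0.0
for _ in range(nsteps):
    dT = -k * (T - Tinf)
    ds = -(T - Tinf) - k * s
    T += dt * dT
    s += dt * ds

t = dt * nsteps
T_exact = Tinf + (T0 - Tinf) * np.exp(-k * t)   # analytic solution
s_exact = -t * (T0 - Tinf) * np.exp(-k * t)     # analytic dT/dk
# First-order propagation of parameter uncertainty, as in the paper's approach:
sigma_T = abs(s) * 0.05                  # e.g. sigma_k = 0.05 (10% of k)
```

The last line is the first-order perturbation step the abstract mentions: once the sensitivity coefficient is available, the parameter standard deviation maps directly to an output standard deviation.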
Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh
2010-10-01
The Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and for power uprates of existing LWRs. In spite of these successes, several aspects of CSAU have been criticized as needing improvement: (1) subjective judgement in the PIRT process; (2) high cost, due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing of numerical errors with other uncertainties; (4) grid dependence and the use of the same numerical grids for both scaled experiments and real plant applications; and (5) user effects. Although a large amount of effort has gone into improving the CSAU methodology, the above issues still exist. With the effort to develop next-generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) quantifying numerical errors: new codes that are fully implicit and of higher-order accuracy can run much faster, with numerical errors quantified by FSA; (2) quantitative PIRT (Q-PIRT) to reduce subjective judgement and improve efficiency: treat numerical errors as special sensitivities alongside other physical uncertainties, and consider only parameters whose uncertainties have large effects on design criteria; (3) greatly reducing the computational cost of uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
Sensitivity and uncertainty analyses for thermo-hydraulic calculation of research reactor
Hartini, Entin; Andiwijayakusuma, Dinan; Isnaeni, Muh Darwis
2013-09-09
The sensitivity and uncertainty analysis of input parameters for thermohydraulic calculations of a research reactor was successfully performed in this work. The uncertainty analysis was carried out on input parameters for the thermohydraulic sub-channel analysis using the COOLOD-N code. The input parameters include the radial peaking factor, the increase in bulk coolant temperature, the heat flux factor, and the increase in cladding and fuel meat temperature for a research reactor utilizing plate fuel elements. Input uncertainties of 1%-4% were used in the nominal power calculation. The bubble detachment parameters were computed for the S ratio (the safety margin against the onset of flow instability), which was used to determine the safety level in line with the design of the 'Reactor Serba Guna-G. A. Siwabessy' (RSG-GA Siwabessy). It was concluded from the calculation results that input uncertainties of more than 3% exceed the safety margin of reactor operation.
Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi
2011-01-01
This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to satisfactorily decompose the variance of the plant performance criteria (with R(2) > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors involve ash content and influent inert particulate COD, among others, which are largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters; e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application needs to be highlighted. Overall, global sensitivity analysis proved a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways for reducing uncertainties in plant performance. This information can help engineers design robust WWTPs. PMID:20828785
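Standardized regression coefficients (SRCs) are the slopes of a linear fit after scaling every input and the output to zero mean and unit variance; for independent inputs, the squared SRCs approximate the variance fractions, and their sum approximates the R² that the abstract reports. A minimal sketch with an invented three-input model:

```python
import numpy as np

def src(X, y):
    """Standardized regression coefficients: slopes of the linear fit after
    scaling inputs and output to zero mean and unit variance."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(ys)), Xs]), ys, rcond=None)
    return beta[1:]

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                  # three hypothetical plant inputs
y = 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * rng.normal(size=2000)

s = src(X, y)
# For independent inputs, s[i]**2 approximates the variance fraction explained
# by input i, and (s**2).sum() approximates R^2 of the linear surrogate.
```

When the sum of squared SRCs is close to 1 (as in the study's R² > 0.9), the linear surrogate is a faithful replacement for the full model.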
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are computationally demanding, requiring many model evaluations to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
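VARS itself uses structured star-based sampling; as a much simpler illustration of the underlying variogram idea, the directional variogram γ_i(h) = ½·E[(f(x + h·e_i) − f(x))²] can be estimated by plain Monte Carlo. The toy response surface below (not a hydrological model) has a fast oscillation in one direction and a gentle trend in the other:

```python
import numpy as np

def directional_variogram(f, i, h, n, rng, k=2):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], estimated by Monte Carlo."""
    X = rng.random((n, k)) * (1.0 - h)   # keep x + h*e_i inside the unit cube
    Xh = X.copy()
    Xh[:, i] += h
    d = f(Xh) - f(X)
    return 0.5 * float(np.mean(d * d))

# Toy response surface: fast oscillation in x0, gentle linear trend in x1.
f = lambda X: np.sin(8.0 * np.pi * X[:, 0]) + 0.5 * X[:, 1]
rng = np.random.default_rng(3)
g0 = directional_variogram(f, 0, 0.05, 20000, rng)
g1 = directional_variogram(f, 1, 0.05, 20000, rng)
# At this scale the oscillatory direction has a far larger variogram,
# flagging x0 as the more sensitive (and rougher) direction.
```

Sweeping h from small to large is what gives VARS its multi-scale character: derivative-like measures emerge at small h and variance-like measures at large h.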
NASA Astrophysics Data System (ADS)
Schunker, H.; Schou, J.; Ball, W. H.
2016-02-01
Aims: We quantify the effect of observational spectroscopic and asteroseismic uncertainties on regularised least squares (RLS) inversions for the radial differential rotation of Sun-like and subgiant stars. Methods: We first solved the forward problem to model rotational splittings plus the observed uncertainties for models of a Sun-like star, HD 52265, and a subgiant star, KIC 7341231. We randomly perturbed the parameters of the stellar models within the uncertainties of the spectroscopic and asteroseismic constraints and used these perturbed stellar models to compute rotational splittings. We experimented with three rotation profiles: solid body rotation, a step function, and a smooth rotation profile decreasing with radius. We then solved the inverse problem to infer the radial differential rotation profile using an RLS inversion and kernels from the best-fit stellar model. We also compared RLS, optimally localised average (OLA) and direct functional fitting inversion techniques. Results: We found that the inversions for Sun-like stars with solar-like radial differential rotation profiles are insensitive to the uncertainties in the stellar models. The uncertainties in the splittings dominate the uncertainties in the inversions, and solid body rotation is not excluded. We found that when the rotation rate below the convection zone is increased to six times that of the surface rotation rate, the inferred rotation profile excludes solid body rotation. We showed that when we reduced the uncertainties in the splittings by a factor of about 100, the inversion becomes sensitive to the uncertainties in the stellar model. With the current observational uncertainties, we found that inversions of subgiant stars are sensitive to the uncertainties in the stellar model. Conclusions: Our findings suggest that inversions for the radial differential rotation of subgiant stars would benefit from more tightly constrained stellar models. We conclude that current observational uncertainties
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
Gerhard Strydom
2010-06-01
The need for a defendable and systematic uncertainty and sensitivity analysis approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
NASA Astrophysics Data System (ADS)
Djepa, Vera; Badii, Atta
2016-04-01
The sensitivity of the weather and climate system to sea ice thickness (SIT), sea ice draft (SID) and snow depth (SD) in the Arctic is recognized from various studies. A decrease in SIT will affect atmospheric circulation, temperature, precipitation and wind speed in the Arctic and beyond. Ice thermodynamic and dynamic properties depend strongly on sea ice density (ID) and SD. SIT, SID, ID and SD are sensitive to environmental changes in the polar region and impact the climate system. Accurate forecasts of climate change, sea ice mass balance, ocean circulation and sea-atmosphere interactions require long-term records of SIT, SID, SD and ID with error and uncertainty analyses. The SID, SIT, ID and freeboard (F) have been retrieved from the Radar Altimeter (RA) on board ENVISAT and the IceBridge Laser Altimeter (LA) and validated using over 10 years of collocated observations of SID and SD in the Arctic, provided by the European Space Agency (ESA CCI sea ice ECV project). Improved algorithms to retrieve SIT from LA and RA have been derived by applying statistical analysis. The snow depth is obtained from AMSR-E/Aqua and the NASA IceBridge Snow Depth radar. The sea ice properties of pancake ice have been retrieved from ENVISAT/Synthetic Aperture Radar (ASAR). The uncertainties of the retrieved climate variables have been analysed, and the impact of snow depth and sea ice density on retrieved SIT has been estimated. The sensitivity analysis illustrates the impact of uncertainties of the input climate variables (ID and SD) on the accuracy of the retrieved output variables (SIT and SID). The developed methodology of uncertainty and sensitivity analysis is essential for assessment of the impact of environmental variables on climate change and better understanding of the relationship between input and output variables. The uncertainty analysis quantifies the uncertainties of the model results and the sensitivity analysis evaluates the contribution of each input variable to
Uncertainty and Sensitivity Analyses of a Two-Parameter Impedance Prediction Model
NASA Technical Reports Server (NTRS)
Jones, M. G.; Parrott, T. L.; Watson, W. R.
2008-01-01
This paper presents comparisons of predicted impedance uncertainty limits derived from Monte-Carlo-type simulations with a Two-Parameter (TP) impedance prediction model and measured impedance uncertainty limits based on multiple tests acquired in NASA Langley test rigs. These predicted and measured impedance uncertainty limits are used to evaluate the effects of simultaneous randomization of each input parameter for the impedance prediction and measurement processes. A sensitivity analysis is then used to further evaluate the TP prediction model by varying its input parameters on an individual basis. The variation imposed on the input parameters is based on measurements conducted with multiple tests in the NASA Langley normal incidence and grazing incidence impedance tubes; thus, the input parameters are assigned uncertainties commensurate with those of the measured data. These same measured data are used with the NASA Langley impedance measurement (eduction) processes to determine the corresponding measured impedance uncertainty limits, such that the predicted and measured impedance uncertainty limits (95% confidence intervals) can be compared. The measured reactance 95% confidence intervals encompass the corresponding predicted reactance confidence intervals over the frequency range of interest. The same is true for the confidence intervals of the measured and predicted resistance at near-resonance frequencies, but the predicted resistance confidence intervals are lower than the measured resistance confidence intervals (no overlap) at frequencies away from resonance. A sensitivity analysis indicates the discharge coefficient uncertainty is the major contributor to uncertainty in the predicted impedances for the perforate-over-honeycomb liner used in this study. This insight regarding the relative importance of each input parameter will be used to guide the design of experiments with test rigs currently being brought on-line at NASA Langley.
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites...
Sufficiently elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The ensuing challenge of examining ever more complex, integrated, higher-ord...
PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS
Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...
Technology Transfer Automated Retrieval System (TEKTRAN)
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN
In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...
PRACTICAL SENSITIVITY AND UNCERTAINTY ANALYSIS TECHNIQUES APPLIED TO AGRICULTURAL SYSTEMS MODELS
Technology Transfer Automated Retrieval System (TEKTRAN)
We present a practical evaluation framework for analysis of two complex, process-based agricultural system models, WEPP and RZWQM. The evaluation framework combines sensitivity analysis and the uncertainty analysis techniques of first order error analysis (FOA) and Monte Carlo simulation with Latin ...
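First order error analysis (FOA), as named above, propagates input variances through the model gradient, while Monte Carlo samples the inputs directly; a minimal comparison on an invented two-input model y = x1·exp(−x2):

```python
import numpy as np

# First-order (Taylor) error analysis for y = x1 * exp(-x2):
#   Var(y) ≈ sum_i (df/dx_i)^2 * sigma_i^2   (independent inputs)
x1, x2 = 2.0, 0.5
s1, s2 = 0.1, 0.05                       # assumed input standard deviations
dfdx1 = np.exp(-x2)
dfdx2 = -x1 * np.exp(-x2)
var_foa = dfdx1**2 * s1**2 + dfdx2**2 * s2**2

# Monte Carlo check of the first-order estimate
rng = np.random.default_rng(7)
X1 = rng.normal(x1, s1, 200_000)
X2 = rng.normal(x2, s2, 200_000)
var_mc = float(np.var(X1 * np.exp(-X2)))
```

For inputs this mildly uncertain, the two estimates agree closely; FOA degrades as input uncertainties grow or the model becomes strongly nonlinear, which is when full Monte Carlo (or LHS) is preferred.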
NASA Astrophysics Data System (ADS)
Dai, H.; Ye, M.
2013-12-01
Groundwater contamination is a serious health and environmental problem in many areas of the world. Groundwater reactive transport modeling is vital for making predictions of future contaminant transport, but these predictions are inherently uncertain, and uncertainty is one of the greatest obstacles in groundwater reactive transport modeling. We propose a Bayesian network approach for quantifying this uncertainty and implement the network for a groundwater reactive transport model as an illustration. In the Bayesian network, different uncertainty sources are represented as uncertain nodes. All the nodes are characterized by multiple states, representing their uncertainty in the form of continuous or discrete probability distributions, which are propagated to the model endpoint: the spatial distribution of contaminant concentrations. After building the Bayesian network, uncertainty quantification is conducted through Monte Carlo simulations to obtain probability distributions of the variables of interest. In this study, uncertainty sources include scenario uncertainty, model uncertainty, parameter uncertainty, and data uncertainty. Variance decomposition is used to quantify the relative contributions of the various sources to predictive uncertainty. Based on the variance decomposition, the Sobol' global sensitivity index is extended from parametric uncertainty to account for model and scenario uncertainty, and individual parameter sensitivity indices are estimated with consideration of multiple models and scenarios. While these new developments are illustrated using a relatively simple groundwater reactive transport model, our methods are applicable to a wide range of models. The results of uncertainty quantification and sensitivity analysis are useful for environmental managers and decision-makers in formulating policies and strategies.
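The Sobol' global sensitivity indices mentioned above can be estimated with plain Monte Carlo sampling. The sketch below uses a pick-freeze (Saltelli-type) estimator on a toy two-input function standing in for the reactive transport model; the function, its coefficients, and the U(0,1) inputs are all invented for illustration, not taken from the paper.

```python
import random

def model(x1, x2):
    # Stand-in for an expensive simulator; for this additive model the
    # exact first-order Sobol' indices are 0.8 and 0.2.
    return 2.0 * x1 + x2

def sobol_first_order(n=50_000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model with independent U(0,1) inputs (Saltelli-style estimator)."""
    rng = random.Random(seed)
    A = [[rng.random(), rng.random()] for _ in range(n)]
    B = [[rng.random(), rng.random()] for _ in range(n)]
    yA = [model(*row) for row in A]
    yB = [model(*row) for row in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(2):
        yABi = []
        for k in range(n):
            row = list(A[k])
            row[i] = B[k][i]          # replace only input i ("pick-freeze")
            yABi.append(model(*row))
        # V_i ~= (1/n) sum f(B) * (f(A with column i from B) - f(A))
        Vi = sum(yB[k] * (yABi[k] - yA[k]) for k in range(n)) / n
        S.append(Vi / var)
    return S
```

In a real application `model` would wrap one run of the transport simulator, and the sampling would be extended across alternative models and scenarios as the paper proposes.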
NASA Astrophysics Data System (ADS)
Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The
pH-Sensitive Polymers for Improving Reservoir Sweep and Conformance Control in Chemical Flooding
Mukul Sharma; Steven Bryant; Chun Huh
2008-03-31
viscoelastic behavior as functions of pH; shear rate; polymer concentration; salinity, including divalent ion effects; polymer molecular weight; and degree of hydrolysis. A comprehensive rheological model was developed for HPAM solution rheology in terms of shear rate, pH, polymer concentration, and salinity, so that the spatial and temporal changes in viscosity during polymer flow in the reservoir can be accurately modeled. A series of acid coreflood experiments were conducted to understand the geochemical reactions relevant for both near-wellbore injection profile control and conformance control applications. These experiments showed that the use of hydrochloric acid as a pre-flush is not viable because of its high reaction rate with the rock. The use of citric acid as a pre-flush was found to be quite effective. This weak acid has a slow rate of reaction with the rock and can buffer the pH to below 3.5 for extended periods of time. With the citric acid pre-flush, the polymer could be efficiently propagated through the core in a low-pH environment, i.e., at a low viscosity. The transport of various HPAM solutions was studied in sandstones, in terms of permeability reduction, mobility reduction, adsorption, and inaccessible pore volume with different process variables: injection pH, polymer concentration, polymer molecular weight, salinity, degree of hydrolysis, and flow rate. Measurements of polymer effluent profiles and tracer tests show that polymer retention increases at lower pH. A new simulation capability to model deep-penetrating mobility control or conformance control using pH-sensitive polymer was developed. The coreflood acid injection experiments were history matched to estimate geochemical reaction rates. Preliminary scale-up simulations employing linear and radial geometry floods in 2-layer reservoir models were conducted.
It is clearly shown that the injection rate of pH-sensitive polymer solutions can be significantly increased by injecting
James, Scott Carlton
2004-08-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
NASA Astrophysics Data System (ADS)
Pastore, Giovanni; Swiler, L. P.; Hales, J. D.; Novascone, S. R.; Perez, D. M.; Spencer, B. W.; Luzzi, L.; Van Uffelen, P.; Williamson, R. L.
2015-01-01
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code with a recently implemented physics-based model for fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information in the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior predictions with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, significantly higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; Novascone, Stephen R.; Perez, Danielle M.; Spencer, Benjamin W.; Luzzi, Lelio; Uffelen, Paul Van; Williamson, Richard L.
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
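The first- and second-order moment matching described above can be sketched in a few lines: propagate input means and variances through sensitivity derivatives evaluated at the mean, then check the approximation against Monte Carlo sampling. The response function and input distributions below are stand-ins, not the quasi 1-D Euler CFD code; the derivatives are taken by finite differences rather than the code's analytic sensitivities.

```python
import math
import random

def f(x1, x2):
    # Illustrative smooth response; a real application would wrap the CFD code.
    return x1 ** 2 + math.sin(x2)

def first_order_moments(mu, sigma, h=1e-6):
    """First-order statistical moment matching: mean ~= f(mu),
    variance ~= sum of (df/dx_i * sigma_i)^2 for independent inputs."""
    d1 = (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h)
    d2 = (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)
    mean = f(*mu)
    var = (d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2
    return mean, var

def monte_carlo_moments(mu, sigma, n=200_000, seed=2):
    """Brute-force check with independent, normally distributed inputs."""
    rng = random.Random(seed)
    ys = [f(rng.gauss(mu[0], sigma[0]), rng.gauss(mu[1], sigma[1]))
          for _ in range(n)]
    m = sum(ys) / n
    return m, sum((y - m) ** 2 for y in ys) / n
```

For small input standard deviations the two estimates agree closely, which is the regime in which the paper finds the approximate method valid.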
James, Scott Carlton; Zimmerman, Dean Anthony
2003-10-01
Incorporating results from a previously developed finite element model, an uncertainty and parameter sensitivity analysis was conducted using preliminary site-specific data from Horonobe, Japan (data available from five boreholes as of 2003). Latin Hypercube Sampling was used to draw random parameter values from the site-specific measured, or approximated, physicochemical uncertainty distributions. Using pathlengths and groundwater velocities extracted from the three-dimensional, finite element flow and particle tracking model, breakthrough curves for multiple realizations were calculated with the semi-analytical, one-dimensional, multirate transport code, STAMMT-L. A stepwise linear regression analysis using the 5, 50, and 95% breakthrough times as the dependent variables and LHS sampled site physicochemical parameters as the independent variables was used to perform a sensitivity analysis. Results indicate that the distribution coefficients and hydraulic conductivities are the parameters responsible for most of the variation among simulated breakthrough times. This suggests that researchers and data collectors at the Horonobe site should focus on accurately assessing these parameters and quantifying their uncertainty. Because the Horonobe Underground Research Laboratory is in an early phase of its development, this work should be considered as a first step toward an integration of uncertainty and sensitivity analyses with decision analysis.
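A minimal version of the LHS-plus-regression workflow might look like the following. The surrogate `breakthrough_time` and its coefficients are hypothetical, chosen only so that the distribution coefficient (kd) dominates the output variance, mirroring the paper's finding; a real study would replace it with STAMMT-L runs and use full stepwise regression rather than single-input standardized coefficients.

```python
import random

def latin_hypercube(n, d, rng):
    """n-point LHS design on the unit cube: each column is a shuffled
    set of stratified U(0,1) draws, one per stratum."""
    cols = []
    for _ in range(d):
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append(strata)
    return [[cols[j][i] for j in range(d)] for i in range(n)]

def breakthrough_time(kd, k_hyd, porosity):
    # Hypothetical linear surrogate for a breakthrough time; the
    # coefficients are invented, not taken from the Horonobe model.
    return 50.0 * kd + 15.0 * k_hyd + 2.0 * porosity

def src(xcol, y):
    """Standardized regression coefficient of y on one input; for
    uncorrelated inputs this equals the Pearson correlation."""
    n = len(y)
    mx, my = sum(xcol) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xcol, y))
    sxx = sum((a - mx) ** 2 for a in xcol)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rng = random.Random(3)
X = latin_hypercube(500, 3, rng)
y = [breakthrough_time(*row) for row in X]
scores = [abs(src([row[j] for row in X], y)) for j in range(3)]
# scores ranks the inputs by their influence on simulated breakthrough
```

Ranking the absolute coefficients reproduces the paper's conclusion in miniature: the inputs with the largest coefficients account for most of the variation among simulated breakthrough times.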
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case.
Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
NASA Astrophysics Data System (ADS)
Sokolov, A. P.; Monier, E.; Forest, C. E.
2013-12-01
Climate sensitivity and the rate of heat uptake by the deep ocean are two main characteristics of the climate system defining its response to a prescribed external forcing. We study the relative contributions of the uncertainty in these two characteristics by means of numerical simulations with the MIT Earth System Model (MESM) of intermediate complexity. The MESM consists of a 2D (zonally averaged) atmospheric model coupled to an anomaly-diffusing ocean model. Probability distributions for climate sensitivity and the rate of oceanic heat uptake are obtained using available data on radiative forcing and temperature changes over the 20th century. The results from three 400-member ensembles of long-term (years 1860 to 3000) climate simulations for the IPCC RCP6.0 forcing scenario will be presented. The values of climate sensitivity and the rate of oceanic heat uptake used in the first ensemble were chosen by sampling their joint probability distribution. In the other two ensembles, uncertainty in only one characteristic was taken into account, while the median value was used for the other. Results show that the contributions of the uncertainty in climate sensitivity and the rate of heat uptake by the deep ocean to the overall uncertainty in projected surface warming and sea level rise are time dependent. The contribution of the uncertainty in the rate of heat uptake to uncertainty in the projected surface air temperature increase is similar to that of the uncertainty in climate sensitivity while forcing is increasing, but it becomes significantly smaller after forcing is stabilized. The magnitude of surface warming at the end of the 30th century is defined almost exclusively by the climate sensitivity distribution. In contrast, uncertainty in the heat uptake has a noticeable effect on projected sea level rise for the whole period of the simulations.
Sensitivities and Uncertainties Related to Numerics and Building Features in Urban Modeling
Joseph III, Robert Anthony; Slater, Charles O; Evans, Thomas M; Mosher, Scott W; Johnson, Jeffrey O
2011-01-01
Oak Ridge National Laboratory (ORNL) has been engaged in the development and testing of a computational system that would use a grid of activation foil detectors to provide postdetonation forensic information from a nuclear device detonation. ORNL has developed a high-performance, three-dimensional (3-D) deterministic radiation transport code called Denovo. Denovo solves the multigroup discrete ordinates (SN) equations and can output 3-D data in a platform-independent format that can be efficiently analyzed using parallel, high-performance visualization tools. To evaluate the sensitivities and uncertainties associated with the deterministic computational method numerics, a numerical study on the New York City Times Square model was conducted using Denovo. In particular, the sensitivities and uncertainties associated with various components of the calculational method were systematically investigated, including (a) the Legendre polynomial expansion order of the scattering cross sections, (b) the angular quadrature, (c) multigroup energy binning, (d) spatial mesh sizes, (e) the material compositions of the building models, (f) the composition of the foundations upon which the buildings rest (e.g., ground, concrete, or asphalt), and (g) the amount of detail included in the building models. Although Denovo may calculate the idealized model well, there may be uncertainty in the results because of slight departures of the above-named parameters from those used in the idealized calculations. Fluxes and activities at selected locations from perturbed calculations are compared with corresponding values from the idealized or base case to determine the sensitivities associated with specified parameter changes. Results indicate that uncertainties related to numerics can be controlled by using higher fidelity models, but more work is needed to control the uncertainties related to the model.
Sensitivity and first-step uncertainty analyses for the preferential flow model MACRO.
Dubus, Igor G; Brown, Colin D
2002-01-01
Sensitivity analyses for the preferential flow model MACRO were carried out using one-at-a-time and Monte Carlo sampling approaches. Four different scenarios were generated by simulating leaching to depth of two hypothetical pesticides in a sandy loam and a more structured clay loam soil. Sensitivity of the model was assessed using the predictions for accumulated water percolated at a 1-m depth and accumulated pesticide losses in percolation. Results for simulated percolation were similar for the two soils. Predictions of water volumes percolated were found to be only marginally affected by changes in input parameters and the most influential parameter was the water content defining the boundary between micropores and macropores in this dual-porosity model. In contrast, predictions of pesticide losses were found to be dependent on the scenarios considered and to be significantly affected by variations in input parameters. In most scenarios, predictions for pesticide losses by MACRO were most influenced by parameters related to sorption and degradation. Under specific circumstances, pesticide losses can be largely affected by changes in hydrological properties of the soil. Since parameters were varied within ranges that approximated their uncertainty, a first-step assessment of uncertainty for the predictions of pesticide losses was possible. Large uncertainties in the predictions were reported, although these are likely to have been overestimated by considering a large number of input parameters in the exercise. It appears desirable that a probabilistic framework accounting for uncertainty is integrated into the estimation of pesticide exposure for regulatory purposes. PMID:11837426
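The one-at-a-time approach used for MACRO can be sketched as below. Here `pesticide_loss` is an invented stand-in for a MACRO run, with parameters loosely named after sorption (koc), degradation half-life (dt50), and the boundary water content between micropores and macropores (theta_b); the functional form and base values are illustrative assumptions only.

```python
def pesticide_loss(params):
    # Hypothetical stand-in for a MACRO output (accumulated pesticide loss);
    # a real study would run the MACRO executable for each parameter set.
    koc, dt50, theta_b = params["koc"], params["dt50"], params["theta_b"]
    return 100.0 * (dt50 / (dt50 + 20.0)) / (1.0 + 0.05 * koc) * (1.2 - theta_b)

def oat_sensitivity(model, base, rel_change=0.10):
    """One-at-a-time sensitivity: perturb each input by +/-10% around the
    base run and report a normalized sensitivity ratio (elasticity),
    (dy / y0) / (dx / x0), estimated by central differences."""
    y0 = model(base)
    out = {}
    for name in base:
        up, down = dict(base), dict(base)
        up[name] *= 1.0 + rel_change
        down[name] *= 1.0 - rel_change
        out[name] = (model(up) - model(down)) / (2.0 * rel_change * y0)
    return out

base = {"koc": 60.0, "dt50": 15.0, "theta_b": 0.30}
ranking = oat_sensitivity(pesticide_loss, base)
```

For this toy response the sorption and degradation parameters carry the largest elasticities, echoing the paper's finding that predictions of pesticide losses were most influenced by sorption- and degradation-related parameters.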
Uncertainty and Sensitivity analysis of a physically-based landslide model
NASA Astrophysics Data System (ADS)
Yatheendradas, Soni; Kirschbaum, Dalia
2015-04-01
Rainfall-induced landslides are hazardous to life and property. Rain data sources such as satellite remote sensors, combined with physically-based models of landslide initiation, are a potentially economical solution for anticipating and providing early warning of possible landslide activity. In this work, we explore the output uncertainty of the physically-based USGS model TRIGRS (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability) under both an a priori model parameter specification scenario and a model calibration scenario using a powerful stochastic optimization algorithm. We study a set of 50+ historic landslides in Macon County, North Carolina, as an example of a robust regional analysis. We then conduct a robust multivariate sensitivity analysis of the modeled output to various factors including rainfall forcing, initial and boundary conditions, and model parameters including topographic slope. Satellite rainfall uncertainty distributions are prescribed based on stochastic regressions to benchmark rain values at each location. Information about the most influential factors from the sensitivity analysis will help to preferentially direct field work efforts towards the associated observations, contributing to reduced output uncertainty in future modeling efforts. We also show how model complexity can be conveniently reduced by neglecting non-influential factors while maintaining required levels of predictive accuracy and uncertainty.
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Strydom, Gerhard
2013-01-01
The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
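Tolerance intervals with stated confidence levels, as used above, are commonly obtained from order statistics via Wilks' formula. A minimal sketch, with an invented normal temperature distribution standing in for actual PEBBED-THERMIX runs:

```python
import random

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n for which the sample maximum is a one-sided upper
    tolerance limit (first-order Wilks formula): 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

# Pretend each "code run" returns a DLOFC peak fuel temperature (deg C);
# the distribution below is an illustrative assumption.
rng = random.Random(4)
n = wilks_sample_size()                       # 59 runs for a 95%/95% limit
temps = [1500.0 + 60.0 * rng.gauss(0.0, 1.0) for _ in range(n)]
upper_95_95 = max(temps)                      # one-sided 95%/95% tolerance limit
```

With 800 calculations available, as in the study, higher-order Wilks estimators (taking an order statistic below the maximum) give tighter limits at the same coverage and confidence.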
NASA Astrophysics Data System (ADS)
Singh, R.; Achutarao, K. M.
2014-12-01
Reliable future climate information is a necessary requirement for the scientific and policy-making community. Uncertainty from various sources affects the accuracy of climate change projections at different scales, and it becomes even more complex at the regional scale. This study is an attempt to unfold the levels of uncertainty in future climate projections over the Indian region, adding value to the information on mean changes reported in Chaturvedi et al. (Curr. Sci., 2012). We examine model projections of temperature and precipitation using output from the CMIP5 database. Using the Reliability Ensemble Averaging method (REA; Giorgi and Mearns, J. Climate, 2002) and the upgraded REA method (Xu et al., Clim. Res., 2010) with some modifications, we examine the uncertainty in projections for the annual, Indian summer monsoon (JJA), and winter (DJF) seasons under the RCP4.5 and RCP8.5 scenarios. Both methods apply the principle of weighting model-based projections according to objective model performance criteria, such as biases (both univariate and multivariate) in simulating past climate and measures of simulated variability. The sensitivity to these criteria is tested by varying the metrics and the weights assigned to them. The sensitivity of the metrics to observational uncertainty is also examined at regional, sub-regional, and grid-point levels.
NASA Astrophysics Data System (ADS)
Silva, J. M. N.; Carreiras, J. M. B.; Rosa, I.; Pereira, J. M. C.
2011-10-01
Annual emissions of CO2, CH4, CO, N2O, and NOx from biomass burning in shifting cultivation systems in tropical Asia, Africa, and America were estimated at national and continental levels as the product of area burned, aboveground biomass, combustion completeness, and emission factor. The total area of shifting cultivation in each country was derived from the Global Land Cover 2000 map, while the area cleared and burned annually was obtained by multiplying the total area by the rotation cycle of shifting cultivation, calculated using cropping and fallow lengths reported in the literature. Aboveground biomass accumulation was estimated as a function of the duration and mean temperature of the growing season, soil texture type, and length of the fallow period. The uncertainty associated with each model variable was estimated, and an uncertainty and sensitivity analysis of greenhouse gas estimates was performed with Monte Carlo and variance decomposition techniques. Our results reveal large uncertainty in emission estimates for all five gases. In the case of CO2, mean (standard deviation) emissions from shifting cultivation in Asia, Africa, and America were estimated at 241 (132), 205 (139), and 295 (197) Tg yr-1, respectively. Combustion completeness and emission factors were the model inputs that contributed the most to the uncertainty of estimates. Our mean estimates are lower than the literature values for atmospheric emission from biomass burning in shifting cultivation systems. Only mean values could be compared since other studies do not provide any measure of uncertainty.
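The product model described above, with a Monte Carlo propagation of input uncertainty, can be sketched as follows. The parameter values and distributions are illustrative only, not the study's inventories:

```python
import random, statistics

def co2_emission_tg(area_mha, agb_t_ha, cc, ef_g_kg):
    # Burned dry matter in Tg (1 Mha x 1 t/ha = 1 Tg), times the
    # CO2 emission factor in g of gas per kg of dry matter burned.
    return area_mha * agb_t_ha * cc * ef_g_kg / 1000.0

random.seed(0)
draws = []
for _ in range(5000):
    area = random.gauss(20.0, 2.0)     # Mha cleared and burned per year
    agb = random.gauss(55.0, 10.0)     # aboveground biomass, t/ha
    cc = random.uniform(0.2, 0.6)      # combustion completeness
    ef = random.gauss(1580.0, 90.0)    # CO2 emission factor, g/kg
    draws.append(co2_emission_tg(area, agb, cc, ef))

print(f"CO2: {statistics.fmean(draws):.0f} ({statistics.stdev(draws):.0f}) Tg/yr")
```

The wide combustion-completeness range dominates the output spread here, echoing the paper's finding that combustion completeness and emission factors contribute most to the uncertainty.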
Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident
Pasichnyk, I.; Perin, Y.; Velkov, K.
2013-07-01
The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
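The variance-based indices used here can be estimated with a Saltelli-style pick-freeze scheme. A minimal sketch for independent U(0,1) inputs, demonstrated on a toy additive function rather than the actual J&E equations:

```python
import random

def first_order_sobol(f, dim, n=20000, seed=7):
    # Saltelli-style pick-freeze estimator of first-order Sobol indices
    # for a model f of `dim` independent U(0,1) inputs.
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(dim):
        # A with column i "frozen" from B:
        yABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        Vi = sum(yB[j] * (yABi[j] - yA[j]) for j in range(n)) / n
        S.append(Vi / var)
    return S

# Additive test model y = x1 + 2*x2 -> exact indices 0.2 and 0.8.
S = first_order_sobol(lambda x: x[0] + 2.0 * x[1], dim=2)
print([round(s, 2) for s in S])
```

For a real application the uniform samples would be mapped through the inverse CDFs of the input distributions (air exchange rate, effective diffusivity, permeability, and so on).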
Nasif, Hesham; Neyama, Atsushi
2003-02-26
This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high-level radioactive waste repositories. SUA is a tool to perform uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System (WIRS) model, which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through the repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the output values of the maximum release rate in the form of time series, together with the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), the diffusion depth, and the water flow rate in the excavation-disturbed zone (EDZ).
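Distribution-free bounds of the Tchebycheff family are easy to sketch. Below is the one-sided (Cantelli) variant applied to mock release rates; the lognormal sample is illustrative, not WIRS output:

```python
import math, random, statistics

def cantelli_upper_bound(sample, conf=0.95):
    # One-sided Chebyshev (Cantelli) bound: at least a fraction `conf` of
    # the distribution lies below mean + k*sd, with k = sqrt(conf/(1-conf)),
    # regardless of the distribution's shape.
    k = math.sqrt(conf / (1.0 - conf))
    return statistics.fmean(sample) + k * statistics.stdev(sample)

random.seed(4)
release = [random.lognormvariate(0.0, 0.5) for _ in range(1000)]  # mock rates
bound = cantelli_upper_bound(release)
empirical95 = sorted(release)[949]
print(round(bound, 2), round(empirical95, 2))
```

The bound is deliberately conservative: it sits well above the empirical 95th percentile, which is the price paid for making no distributional assumption.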
Li, W B; Hoeschen, C
2010-01-01
Mathematical models for the kinetics of radiopharmaceuticals in humans were developed and are used to estimate the radiation absorbed dose for patients in nuclear medicine by the International Commission on Radiological Protection and the Medical Internal Radiation Dose (MIRD) Committee. However, because the residence times used were derived from different subjects, some even with different ethnic backgrounds, a large variation in the model parameters propagates to a high uncertainty in the dose estimation. In this work, a method was developed for analysing the uncertainty and sensitivity of biokinetic models that are used to calculate the residence times. The biokinetic model of (18)F-FDG (FDG) developed by the MIRD Committee was analysed by this method. The sources of uncertainty of all model parameters were evaluated based on the experiments. The Latin hypercube sampling technique was used to sample the parameters for model input. Kinetic modelling of FDG in humans was performed. Sensitivity of model parameters was indicated by combining the model input and output, using regression and partial correlation analysis. The transfer rate parameter from plasma to the fast tissue compartment has the greatest influence on the residence time of plasma. Optimisation of biokinetic data acquisition in clinical practice by exploiting the sensitivity of model parameters obtained in this study is discussed. PMID:20185457
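The Latin hypercube sampling step mentioned above can be sketched in a few lines: one stratified draw per equal-probability bin for each parameter, with bin orders shuffled independently per column:

```python
import random

def latin_hypercube(n, dim, seed=0):
    # n samples on (0,1)^dim: each column takes exactly one value from
    # each of the n equal-probability bins, in an independently
    # shuffled order.
    rng = random.Random(seed)
    cols = []
    for _ in range(dim):
        pts = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(pts)
        cols.append(pts)
    return [list(row) for row in zip(*cols)]

# 10 parameter sets over 3 inputs; the first column hits every decile once.
design = latin_hypercube(10, 3)
print(sorted(int(x[0] * 10) for x in design))
```

In practice each column would then be mapped through the inverse CDF of the corresponding biokinetic parameter's uncertainty distribution.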
NASA Astrophysics Data System (ADS)
Bonadonna, Costanza; Biass, Sébastien; Costa, Antonio
2015-04-01
Despite recent advances in geophysical monitoring and real-time quantitative observations of explosive volcanic eruptions, the characterization of tephra deposits remains one of the largest sources of information on Eruption Source Parameters (ESPs) (i.e. plume height, erupted volume/mass, Mass Eruption Rate - MER, eruption duration, Total Grain-Size Distribution - TGSD). ESPs are crucial for the characterization of volcanic systems and for the compilation of comprehensive hazard scenarios but are naturally associated with various degrees of uncertainty that are traditionally not well quantified. Recent studies have highlighted the uncertainties associated with the estimation of ESPs, mostly related to: i) the intrinsic variability of the natural system, ii) observational error, and iii) the strategies used to determine physical parameters. Here we review recent studies focused on the characterization of these uncertainties, and we present a sensitivity analysis for the determination of ESPs and a systematic investigation of the propagation of uncertainty applied to two case studies. In particular, we highlight the dependence of ESPs on specific observations used as input parameters (i.e. diameter of the largest clasts, thickness measurements, area of isopach contours, deposit density, downwind and crosswind range of isopleth maps, and empirical constants and wind speed for the determination of MER). The highest uncertainty is associated with the estimation of MER and eruption duration and is related to the determination of the crosswind range of isopleth maps and the empirical constants used in the parameterization relating MER and plume height. Given the power-law nature of the relation between MER and plume height, the propagation of uncertainty is not symmetrical, and both an underestimation of the empirical constant and an overestimation of plume height have the highest impact on the final outcome. A ± 20% uncertainty on thickness
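The asymmetric propagation can be seen directly from a power-law MER-height scaling. The prefactor and exponent below are hypothetical placeholders (the exponent mirrors the widely used H ~ MER^0.241 form), not the constants calibrated in the paper:

```python
def mer_from_height(H_km, a=140.0, b=4.15):
    # Hypothetical power-law fit MER = a * H**b (kg/s); b ~ 1/0.241
    # corresponds to inverting an H ~ MER**0.241 scaling.
    return a * H_km ** b

H = 10.0
base = mer_from_height(H)
up = mer_from_height(1.2 * H) / base - 1.0    # +20% plume height
down = mer_from_height(0.8 * H) / base - 1.0  # -20% plume height
print(f"+20% H -> {up:+.0%} MER, -20% H -> {down:+.0%} MER")
```

A symmetric ±20% error in plume height thus maps to a strongly asymmetric error in MER, with the overestimate side dominating.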
A guide to uncertainty quantification and sensitivity analysis for cardiovascular applications.
Eck, Vinzenz Gregor; Donders, Wouter Paulus; Sturdy, Jacob; Feinberg, Jonathan; Delhaas, Tammo; Hellevik, Leif Rune; Huberts, Wouter
2016-08-01
As we shift from population-based medicine towards a more precise patient-specific regime guided by predictions of verified and well-established cardiovascular models, an urgent question arises: how sensitive are the model predictions to errors and uncertainties in the model inputs? To make our models suitable for clinical decision-making, precise knowledge of prediction reliability is of paramount importance. Efficient and practical methods for uncertainty quantification (UQ) and sensitivity analysis (SA) are therefore essential. In this work, we explain the concepts of global UQ and global, variance-based SA along with two often-used methods that are applicable to any model without requiring model implementation changes: Monte Carlo (MC) and polynomial chaos (PC). Furthermore, we propose a guide for UQ and SA according to a six-step procedure and demonstrate it for two clinically relevant cardiovascular models: model-based estimation of the fractional flow reserve (FFR) and model-based estimation of the total arterial compliance (CT). Both MC and PC produce identical results and may be used interchangeably to identify the most significant model inputs with respect to uncertainty in model predictions of FFR and CT. However, PC is more cost-efficient as it requires an order of magnitude fewer model evaluations than MC. Additionally, we demonstrate that targeted reduction of uncertainty in the most significant model inputs reduces the uncertainty in the model predictions efficiently. In conclusion, this article offers a practical guide to UQ and SA to help move the clinical application of mathematical models forward. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26475178
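The polynomial chaos idea can be sketched for a single standard-normal input: project the model onto Hermite polynomials, then read the output mean and variance off the coefficients. This is a non-intrusive toy (the projection integrals are themselves estimated by Monte Carlo), not the regression/quadrature machinery a production PC implementation would use:

```python
import math, random

HERMITE = [lambda x: 1.0, lambda x: x, lambda x: x * x - 1.0]  # He0..He2

def pce_coeffs(f, n=100000, seed=3):
    # Projection onto probabilists' Hermite polynomials:
    # c_k = E[f(X) He_k(X)] / k!,  X ~ N(0, 1), estimated by sampling.
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(f(x) * h(x) for x in xs) / n / math.factorial(k)
            for k, h in enumerate(HERMITE)]

def pce_mean_var(c):
    # Output statistics read directly off the chaos coefficients.
    return c[0], sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))

c = pce_coeffs(lambda x: x * x)   # toy model y = x**2 (exact: mean 1, var 2)
print([round(ck, 3) for ck in c], pce_mean_var(c))
```

Once the handful of coefficients is known, no further model evaluations are needed, which is the source of PC's cost advantage over plain MC noted in the abstract.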
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid-latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies, where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward more accurately determining the values of these most influential parameters, through observational studies or by improving existing parameterizations in the sea ice model.
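The emulator workflow, training on a limited budget of expensive runs and cross-validating before trusting predictions, can be sketched with a deliberately simple surrogate. The "model" below is an arbitrary smooth stand-in, and a k-nearest-neighbour average replaces the Gaussian-process or regression emulators usually used:

```python
import math, random

def expensive_model(x):
    # Stand-in for a CICE run: any smooth response of two "parameters".
    snow_cond, grain = x
    return math.sin(snow_cond) + 0.5 * grain ** 2

def knn_emulator(train_x, train_y, x, k=5):
    # Predict by averaging the k nearest training runs.
    order = sorted(range(len(train_x)),
                   key=lambda j: sum((a - b) ** 2 for a, b in zip(train_x[j], x)))
    return sum(train_y[j] for j in order[:k]) / k

rng = random.Random(11)
X = [[rng.uniform(0.0, 2.0), rng.uniform(0.0, 2.0)] for _ in range(400)]
Y = [expensive_model(x) for x in X]

# Leave-one-out cross-validation of the emulator on the 400-run design.
errs = [abs(knn_emulator(X[:i] + X[i + 1:], Y[:i] + Y[i + 1:], X[i]) - Y[i])
        for i in range(len(X))]
print(f"mean LOO error: {sum(errs) / len(errs):.3f}")
```

Once validated, the cheap emulator stands in for the model in the many evaluations a Sobol analysis requires.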
Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.
Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A
2013-02-01
The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
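The threshold-of-acceptability idea can be sketched for two candidate management actions with independent normal effects: maximize the probability that the total outcome clears the threshold, over the budget split. All means and spreads below are hypothetical:

```python
import math

def prob_above(a, T, mu, sd):
    # Outcome = a*X1 + (1-a)*X2 with independent normal effects;
    # P(outcome >= T) via the normal CDF.
    m = a * mu[0] + (1.0 - a) * mu[1]
    s = math.hypot(a * sd[0], (1.0 - a) * sd[1])
    return 0.5 * (1.0 + math.erf((m - T) / (s * math.sqrt(2.0))))

def best_allocation(T, mu=(1.0, 0.8), sd=(0.5, 0.1)):
    # Grid search over the fraction allocated to the risky, high-mean action.
    grid = [i / 1000.0 for i in range(1001)]
    return max(grid, key=lambda a: prob_above(a, T, mu, sd))

# Low aspiration -> diversify; high aspiration -> back the high-mean action.
print(best_allocation(T=0.5), best_allocation(T=1.2))
```

This reproduces the qualitative result in the abstract: a low threshold favours a diversified allocation, while a high threshold pushes the whole budget onto the action with the largest potential effect.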
Is the Smagorinsky coefficient sensitive to uncertainty in the form of the energy spectrum?
NASA Astrophysics Data System (ADS)
Meldi, M.; Lucor, D.; Sagaut, P.
2011-12-01
We investigate the influence of uncertainties in the shape of the energy spectrum over the Smagorinsky ["General circulation experiments with the primitive equations. I: The basic experiment," Mon. Weather Rev. 91(3), 99 (1963)] subgrid scale model constant CS: the analysis is carried out by a stochastic approach based on generalized polynomial chaos. The free parameters in the considered energy spectrum functional forms are modeled as random variables over bounded supports: two models of the energy spectrum are investigated, namely, the functional form proposed by Pope [Turbulent Flows (Cambridge University Press, Cambridge, 2000)] and by Meyers and Meneveau ["A functional form for the energy spectrum parametrizing bottleneck and intermittency effects," Phys. Fluids 20(6), 065109 (2008)]. The Smagorinsky model coefficient, computed from the algebraic relation presented in a recent work by Meyers and Sagaut ["On the model coefficients for the standard and the variational multi-scale Smagorinsky model," J. Fluid Mech. 569, 287 (2006)], is considered as a stochastic process and is described by numerical tools streaming from the probability theory. The uncertainties are introduced in the free parameters shaping the energy spectrum in correspondence to the large and the small scales, respectively. The predicted model constant is weakly sensitive to the shape of the energy spectrum when large scales uncertainty is considered: if the large-eddy simulation (LES) filter cut is performed in the inertial range, a significant probability to recover values lower in magnitude than the asymptotic Lilly-Smagorinsky model constant is recovered. Furthermore, the predicted model constant occurrences cluster in a compact range of values: the correspondent probability density function rapidly drops to zero approaching the extremes values of the range, which show a significant sensitivity to the LES filter width. The sensitivity of the model constant to uncertainties propagated in the
NASA Astrophysics Data System (ADS)
Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin
2016-03-01
Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models because of the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport, and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation system. The uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were relatively lower and more uniform compared with likelihood functions composed of an individual calibration criterion. This
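The GLUE machinery itself is simple to sketch: sample parameter sets, score each with a likelihood measure, discard "non-behavioral" sets below a cutoff, and weight the survivors. The toy exponential-decay model and the exp(-SSE) likelihood below are placeholders for RZWQM2 and the paper's combined likelihood:

```python
import math, random

def simulate(theta, ts):
    # Toy stand-in for the simulator: exponential decay with rate k
    # from initial value y0 (RZWQM2 is far richer than this).
    k, y0 = theta
    return [y0 * math.exp(-k * t) for t in ts]

def glue(obs_t, obs_y, n=5000, behavioral=0.1, seed=5):
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        theta = (rng.uniform(0.05, 1.0), rng.uniform(5.0, 15.0))
        sse = sum((s - o) ** 2 for s, o in zip(simulate(theta, obs_t), obs_y))
        like = math.exp(-sse)            # stand-in for a combined likelihood
        if like > behavioral:            # behavioral cutoff
            kept.append((like, theta))
    total = sum(l for l, _ in kept)
    return [(l / total, th) for l, th in kept]

obs_t = [0.0, 1.0, 2.0, 3.0]
obs_y = simulate((0.3, 10.0), obs_t)     # synthetic "observations", k = 0.3
post = glue(obs_t, obs_y)
k_mean = sum(w * th[0] for w, th in post)
print(len(post), round(k_mean, 3))
```

The normalized weights over the behavioral sets are what GLUE uses to form prediction bounds on the model outputs.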
Risk-sensitive optimal feedback control accounts for sensorimotor behavior under uncertainty.
Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M
2010-01-01
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models. PMID:20657657
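The distinction between risk-neutral and risk-sensitive evaluation can be sketched with a mean-variance objective, a common approximation to the exponential risk-sensitive cost. The two strategies' cost distributions below are hypothetical, not fits to the experiment:

```python
import random, statistics

def risk_sensitive_score(costs, theta):
    # Mean-variance objective: theta > 0 penalises variability
    # (risk-averse), theta < 0 rewards it (risk-seeking),
    # theta = 0 recovers the risk-neutral expected cost.
    return statistics.fmean(costs) + theta * statistics.pvariance(costs)

rng = random.Random(2)
# Simulated per-trial point costs for two movement strategies:
risky = [rng.gauss(10.0, 3.0) for _ in range(5000)]  # lower mean, variable
safe = [rng.gauss(11.0, 1.0) for _ in range(5000)]   # higher mean, reliable

for theta in (0.0, 0.5):
    scores = (risk_sensitive_score(risky, theta),
              risk_sensitive_score(safe, theta))
    print(theta, "risky" if scores[0] < scores[1] else "safe")
```

A risk-neutral evaluator prefers the lower-mean but variable strategy; a risk-averse one (theta > 0) switches to the reliable strategy, mirroring the pessimistic shift the subjects showed under increased noise.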
1996-11-01
The objectives of the research are to evaluate and calculate the sensitivities and uncertainties that exist in model calculations of atmospheric ozone levels as the result of uncertainties associated with the chemical kinetics and photolysis parameterizations used in the mechanisms and codes. Photochemistry and heterogeneous kinetics are to be included. SRI's approach uses the Chemkin/Senkin codes from Sandia National Laboratories, which are public software incorporating the latest algorithms for the direct, efficient calculation of the sensitivity coefficients. These codes provide full sets of concentration derivatives with respect to individual rate constants, temperature, and other species concentrations. Full zero-dimensional, time-resolved calculations may thus be performed over a matrix of initial conditions (temperature, pressure, concentration, and radiation) representative of the range of stratospheric and tropospheric environments. Conditions, parameters, and concentrations are initially obtained from two-dimensional model outputs provided by colleagues at Lawrence Livermore National Laboratory (LLNL). These results are used to mathematically propagate our expert evaluation of the errors associated with individual rate constants and to derive uncertainty estimates for the model calculations.
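The sensitivity coefficients in question are derivatives of species concentrations with respect to rate constants. A minimal sketch using central differences on a toy one-species scheme (a real Senkin-style solver integrates the sensitivity equations directly alongside the kinetics; the rate values here are illustrative):

```python
def ozone(k1, k2, t_end=10.0, dt=0.001):
    # Toy box-model surrogate: production at rate k1, first-order loss
    # at rate k2 * O3, integrated by explicit Euler.
    o3, t = 0.0, 0.0
    while t < t_end:
        o3 += (k1 - k2 * o3) * dt
        t += dt
    return o3

def log_sensitivity(f, p, i, rel=1e-4):
    # Normalised local sensitivity d ln f / d ln p_i by central difference.
    up, dn = list(p), list(p)
    up[i] *= 1.0 + rel
    dn[i] *= 1.0 - rel
    return (f(*up) - f(*dn)) / (2.0 * rel * p[i]) * (p[i] / f(*p))

p = [2.0, 0.5]   # illustrative rate "constants" k1, k2
S = [log_sensitivity(ozone, p, i) for i in range(2)]
print([round(s, 3) for s in S])
```

Multiplying such normalised sensitivities by evaluated fractional uncertainties in each rate constant is the basic step in propagating kinetic uncertainty to the computed ozone level.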
NASA Astrophysics Data System (ADS)
Kavetski, D.; Clark, M. P.; Fenicia, F.
2011-12-01
Hydrologists often face sources of uncertainty that dwarf those normally encountered in many engineering and scientific disciplines. Especially when representing large-scale integrated systems, internal heterogeneities such as stream networks, preferential flowpaths, vegetation, etc., are necessarily represented with a considerable degree of lumping. The inputs to these models are themselves often the products of sparse observational networks. Given the simplifications inherent in environmental models, especially lumped conceptual models, does it really matter how they are implemented? At the same time, given the complexities usually found in the response surfaces of hydrological models, increasingly sophisticated analysis methodologies are being proposed for sensitivity analysis, parameter calibration and uncertainty assessment. Quite remarkably, rather than being caused by the model structure/equations themselves, in many cases model analysis complexities are consequences of seemingly trivial aspects of the model implementation - often, literally, whether the start-of-step or end-of-step fluxes are used! The extent of problems can be staggering, including (i) degraded performance of parameter optimization and uncertainty analysis algorithms, (ii) erroneous and/or misleading conclusions of sensitivity analysis, parameter inference and model interpretations and, finally, (iii) poor reliability of a calibrated model in predictive applications. While the often nontrivial behavior of numerical approximations has long been recognized in applied mathematics and in physically-oriented fields of environmental sciences, it remains a problematic issue in many environmental modeling applications. Perhaps detailed attention to numerics is only warranted for complicated engineering models? Would not numerical errors be an insignificant component of total uncertainty when typical data and model approximations are present? Is this really a serious issue beyond some rare isolated
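The start-of-step versus end-of-step distinction can be made concrete with a single linear reservoir, dS/dt = P - kS. With the same daily step, the two flux timings correspond to explicit and implicit Euler, and only one of them is stable for fast reservoirs (parameter values are illustrative):

```python
def reservoir_explicit(k, P=1.0, dt=1.0, steps=10, S0=10.0):
    # Start-of-step flux: outflow computed from storage at step start.
    S = S0
    for _ in range(steps):
        S += (P - k * S) * dt
    return S

def reservoir_implicit(k, P=1.0, dt=1.0, steps=10, S0=10.0):
    # End-of-step flux: solve S_new = S + (P - k*S_new)*dt for S_new.
    S = S0
    for _ in range(steps):
        S = (S + P * dt) / (1.0 + k * dt)
    return S

# Same model, same step; true steady state is P/k = 0.4. For k*dt > 2 the
# start-of-step scheme oscillates and diverges; the end-of-step scheme
# converges for any step size.
print(reservoir_explicit(2.5), reservoir_implicit(2.5))
```

A calibration or sensitivity analysis run against the explicit variant would be probing this numerical artifact, not the hydrology, which is exactly the kind of implementation-induced complexity the abstract describes.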
NASA Astrophysics Data System (ADS)
Scott, M. J.; Daly, D.; McJeon, H.; Zhou, Y.; Clarke, L.; Rice, J.; Whitney, P.; Kim, S.
2012-12-01
Residential and commercial buildings are a major source of energy consumption and carbon dioxide emissions in the United States, accounting for 41% of energy consumption and 40% of carbon emissions in 2011. Integrated assessment models (IAMs) historically have been used to estimate the impact of energy consumption on greenhouse gas emissions at the national and international level. Increasingly they are being asked to evaluate mitigation and adaptation policies that have a subnational dimension. In the United States, for example, building energy codes are adopted and enforced at the state and local level. Adoption of more efficient appliances and building equipment is sometimes directed or actively promoted by subnational governmental entities for mitigation or adaptation to climate change. The presentation reports on new example results from the Global Change Assessment Model (GCAM) IAM, one of a flexibly-coupled suite of models of human and earth system interactions known as the integrated Regional Earth System Model (iRESM) system. iRESM can evaluate subnational climate policy in the context of the important uncertainties represented by national policy and the earth system. We have added a 50-state detailed U.S. building energy demand capability to GCAM that is sensitive to national climate policy, technology, regional population and economic growth, and climate. We are currently using GCAM in a prototype stakeholder-driven uncertainty characterization process to evaluate regional climate mitigation and adaptation options in a 14-state pilot region in the U.S. upper Midwest. The stakeholder-driven decision process involves several steps, beginning with identifying policy alternatives and decision criteria based on stakeholder outreach, identifying relevant potential uncertainties, then performing sensitivity analysis, characterizing the key uncertainties from the sensitivity analysis, and propagating and quantifying their impact on the relevant decisions. In the
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo simulation and global sensitivity analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparison of the obtained landslide susceptibility maps of both MCDA techniques with known landslides shows that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
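The first phase, deriving criteria weights, is classically done in AHP by taking the principal eigenvector of a pairwise comparison matrix. A minimal sketch via power iteration; the three factors and their Saaty-scale comparisons are hypothetical:

```python
def ahp_weights(M, iters=100):
    # Principal-eigenvector weights of a pairwise comparison matrix,
    # computed by power iteration with normalization at each step.
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical comparisons of three landslide factors on the 1-9 scale:
# slope vs lithology = 3, slope vs land cover = 5, lithology vs land cover = 2.
M = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])
```

Perturbing these weights within plausible ranges and rerunning the susceptibility overlay is precisely the Monte Carlo step of the uncertainty-sensitivity analysis described above.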
Parameter uncertainty, sensitivity, and sediment coupling in bioenergetics-based food web models
Barron, M.G.; Cacela, D.; Beltman, D.
1995-12-31
A bioenergetics-based food web model was developed and calibrated using measured PCB water and sediment concentrations in two Great Lakes food webs: Green Bay, Michigan and Lake Ontario. The model incorporated functionally based trophic levels and sediment, water, and food chain exposures of PCBs to aquatic biota. Sensitivity analysis indicated the parameters with the greatest influence on PCBs in top predators were lipid content of plankton and benthos, planktivore assimilation efficiency, Kow, prey selection, and ambient temperature. Sediment-associated PCBs were estimated to contribute over 90% of PCBs in benthivores and less than 50% in piscivores. Ranges of PCB concentrations in top predators estimated by Monte Carlo simulation incorporating parameter uncertainty were within one order of magnitude of modal values. Model applications include estimation of exceedances of human and ecological thresholds. The results indicate that point estimates from bioenergetics-based food web models have substantial uncertainty that should be considered in regulatory and scientific applications.
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
NASA Astrophysics Data System (ADS)
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
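The core economy of the adjoint approach, one backward solve yielding sensitivities to every input at once, can be shown on a toy linear system. This is a minimal sketch (explicit-Euler time stepping of a generic linear ODE with a constant source), not the PDT depletion solver:

```python
import numpy as np

def forward(A, s, x0, h, n_steps):
    """Explicit-Euler forward solve of dx/dt = A x + s."""
    x = x0.copy()
    for _ in range(n_steps):
        x = x + h * (A @ x + s)
    return x

def adjoint_source_sensitivity(A, d, h, n_steps):
    """Gradient of the response R = d . x(T) with respect to the constant
    source s, from ONE backward (adjoint) sweep instead of one forward
    solve per source component."""
    M = np.eye(len(d)) + h * A          # one-step propagator of the forward scheme
    lam = d.copy()                       # terminal adjoint = response functional
    grad = np.zeros_like(d)
    for _ in range(n_steps):
        grad += h * lam                  # accumulate the source's contribution
        lam = M.T @ lam                  # propagate the adjoint backward in time
    return grad

# Hypothetical 2-species system.
A = np.array([[-1.0, 0.2],
              [0.1, -0.5]])
d = np.array([1.0, 2.0])                 # response weights on the final state
grad = adjoint_source_sensitivity(A, d, h=0.01, n_steps=200)
```

The cost of `adjoint_source_sensitivity` is independent of the number of uncertain source components, which is the property the abstract exploits at scale.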
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more ...
NASA Astrophysics Data System (ADS)
Chiao, T.; Nijssen, B.; Stickel, L.; Lettenmaier, D. P.
2013-12-01
Hydrologic modeling is often used to assess the potential impacts of climate change on water availability and quality. A common approach in these studies is to calibrate the selected model(s) to reproduce historic stream flows prior to the application of future climate projections. This approach relies on the implicit assumptions that the sensitivities of these models to meteorological fluctuations will remain relatively constant under climate change and that these sensitivities are similar among models if all models are calibrated to the same historic record. However, even if the models are able to capture the historic variability in hydrological variables, differences in model structure and parameter estimation contribute to the uncertainties in projected runoff, which confounds the incorporation of these results into water resource management decision-making. A better understanding of the variability in hydrologic sensitivities between different models can aid in bounding this uncertainty. In this research, we characterized the hydrologic sensitivities of three watershed-scale land surface models through a case study of the Bull Run watershed in Northern Oregon. The Distributed Hydrology Soil Vegetation Model (DHSVM), Precipitation-Runoff Modeling System (PRMS), and Variable Infiltration Capacity model (VIC) were implemented and calibrated individually to historic streamflow using a common set of long-term, gridded forcings. In addition to analyzing model performance for a historic period, we quantified the temperature sensitivity (defined as change in runoff in response to change in temperature) and precipitation elasticity (defined as change in runoff in response to change in precipitation) of these three models via perturbation of the historic climate record using synthetic experiments. By comparing how these three models respond to changes in climate forcings, this research aims to test the assumption of constant and similar hydrologic sensitivities. Our ...
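The two diagnostics defined in the abstract are straightforward to compute once a model is in hand. Below is a minimal sketch using a deliberately crude, hypothetical water-balance model in place of DHSVM/PRMS/VIC; the perturbation logic is the part that matters:

```python
import numpy as np

def toy_runoff(precip, temp, pet_coeff=0.05):
    """Hypothetical annual water balance: runoff = precipitation minus a
    temperature-driven evapotranspiration term, floored at zero."""
    et = pet_coeff * temp * precip        # crude ET proportional to T and P
    return np.maximum(precip - et, 0.0)

def precip_elasticity(model, precip, temp, dp=0.01):
    """epsilon = (dQ/Q) / (dP/P), estimated with a central +/- dp (e.g. 1%)
    multiplicative perturbation of the historic precipitation record."""
    q0 = model(precip, temp).mean()
    q_up = model(precip * (1 + dp), temp).mean()
    q_dn = model(precip * (1 - dp), temp).mean()
    return (q_up - q_dn) / (2 * dp * q0)

def temperature_sensitivity(model, precip, temp, dt=1.0):
    """dQ/dT (runoff change per degree), via additive warming of the record."""
    q0 = model(precip, temp).mean()
    q1 = model(precip, temp + dt).mean()
    return (q1 - q0) / dt

# Synthetic 30-year record.
precip = np.full(30, 1000.0)   # mm/yr
temp = np.full(30, 10.0)       # deg C
eps_p = precip_elasticity(toy_runoff, precip, temp)
dq_dt = temperature_sensitivity(toy_runoff, precip, temp)
```

Running the same two functions against each calibrated model, with the same perturbed forcings, is what allows the inter-model comparison of sensitivities that the study describes.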
Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment
G. Palmiotti; M. Salvatores; G. Aliberti
2007-10-01
Sensitivity coefficients can be used for different objectives such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluations of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented along with some unique tools and features related to specific types of problems as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, neutron source associated with fuel fabrication, and experiment representativity.
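The first objective listed, turning sensitivity coefficients into an uncertainty estimate, is conventionally done with the "sandwich" formula, var(R)/R^2 = S^T C S, where S holds relative sensitivities and C is the relative covariance matrix of the nuclear data. A tiny sketch with purely illustrative numbers (not ERANOS data):

```python
import numpy as np

# Sandwich rule: relative variance of an integral response R given relative
# sensitivity coefficients S (percent change in R per percent change in each
# cross section) and a relative covariance matrix C of those cross sections.
# The three entries are hypothetical, e.g. fission, capture, scattering.
S = np.array([0.8, -0.3, 0.1])
C = np.diag([0.02, 0.05, 0.10]) ** 2   # 2%, 5%, 10% std. dev., uncorrelated
var_R = S @ C @ S                      # relative variance of R
std_R = np.sqrt(var_R)                 # relative 1-sigma uncertainty on R
```

With correlated covariance data, off-diagonal terms of C can either inflate or cancel the diagonal contributions, which is why full covariance matrices, not just standard deviations, are needed for target accuracy assessment.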
NASA Astrophysics Data System (ADS)
McKinney, S. W.
2015-12-01
Effectiveness of uncertainty quantification (UQ) and sensitivity analysis (SA) has been improved in ASCEM by choosing from a variety of methods to best suit each model. Previously, ASCEM had a small toolset for UQ and SA, leaving out the benefits of many other methods. Many UQ and SA methods are useful for analyzing models with specific characteristics; therefore, programming these methods into ASCEM would have been inefficient. Embedding the R programming language into ASCEM grants access to a plethora of UQ and SA methods. As a result, the programming required is drastically decreased, and runtime efficiency and analysis effectiveness are increased relative to each unique model.
An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and ...
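The PCA-of-sensitivities idea can be sketched compactly: stack local sensitivity vectors from many canonical cases into a matrix, take its principal components, and rank parameters by their participation in the components that carry most of the variance. This is a generic, hypothetical illustration of the technique, not the reduction algorithm of the thesis:

```python
import numpy as np

def rank_parameters_by_pca(S, variance_target=0.99):
    """Rank rate parameters by their participation in the leading principal
    components of a local sensitivity matrix S (rows: canonical cases /
    conditions, columns: reaction rate parameters).

    Returns (order, importance): column indices sorted most- to
    least-important, and an importance score per column.
    """
    # SVD of the centred sensitivity matrix gives the principal axes directly.
    U, sing, Vt = np.linalg.svd(S - S.mean(axis=0), full_matrices=False)
    var = sing ** 2 / np.sum(sing ** 2)
    k = int(np.searchsorted(np.cumsum(var), variance_target)) + 1
    # Importance: variance-weighted squared loadings over the retained PCs.
    importance = (var[:k, None] * Vt[:k] ** 2).sum(axis=0)
    order = np.argsort(importance)[::-1]
    return order, importance

# Toy matrix: 4 cases x 3 parameters; parameter 0 dominates, parameter 2
# is inert across all cases and should rank last.
S = np.array([[ 1.0,  0.1, 0.5],
              [-1.0,  0.2, 0.5],
              [ 2.0, -0.1, 0.5],
              [-2.0, -0.2, 0.5]])
order, importance = rank_parameters_by_pca(S)
```

Parameters with negligible importance across all retained components are candidates for removal from the skeletal model, which is the pruning step an automated reduction algorithm iterates on.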
Parameter sensitivity and uncertainty analysis for a storm surge and wave model
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Knighton, J.; Kline, S. W.
2015-10-01
Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of hazards posed by Hurricane Bob (1991) utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of eleven total considered) include wind drag, the depth-induced breaking γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters and depth-induced breaking αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large degree of interaction between parameters and a non-linear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.
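Screening a handful of sensitive parameters out of eleven, as described above, is the classic use case for Morris one-at-a-time elementary effects. A minimal numpy sketch of that screening method follows (the abstract does not state which SA method was used, so this is illustrative, with a toy model in place of Delft3D):

```python
import numpy as np

def morris_elementary_effects(model, bounds, n_traj=20, levels=4, seed=0):
    """Morris one-at-a-time screening on the unit hypercube scaled to bounds.

    Returns mu* (mean absolute elementary effect: overall influence) and
    sigma (spread of effects: interactions / non-linearity) per parameter.
    """
    rng = np.random.default_rng(seed)
    k = len(bounds)
    delta = levels / (2.0 * (levels - 1))
    lo, hi = np.array(bounds, dtype=float).T
    effects = np.zeros((n_traj, k))
    for t in range(n_traj):
        x = rng.integers(0, levels // 2, size=k) / (levels - 1)  # grid start point
        y = model(lo + (hi - lo) * x)
        for j in rng.permutation(k):            # move one factor at a time
            x2 = x.copy()
            step = delta if x[j] + delta <= 1.0 + 1e-12 else -delta
            x2[j] += step
            y2 = model(lo + (hi - lo) * x2)
            effects[t, j] = (y2 - y) / step
            x, y = x2, y2
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy "surge model": strong response to parameter 0, weak to 1, none to 2.
mu_star, sigma = morris_elementary_effects(
    lambda p: 10 * p[0] + p[1], [(0, 1)] * 3, n_traj=8)
```

Parameters with small mu* (like the third one here) are exactly the ones the abstract recommends freezing to shrink the dimensionality of the probabilistic hazard study.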
Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 34 imprecisely known input variables on the following reactor accident consequences are studied: number of early fatalities, number of cases of prodromal vomiting, population dose within 10 mi of the reactor, population dose within 1000 mi of the reactor, individual early fatality probability within 1 mi of the reactor, and maximum early fatality distance. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: scaling factor for horizontal dispersion, dry deposition velocity, inhalation protection factor for nonevacuees, groundshine shielding factor for nonevacuees, early fatality hazard function alpha value for bone marrow exposure, and scaling factor for vertical dispersion.
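The sampling-and-regression machinery named in this abstract, Latin hypercube sampling combined with (partial) rank correlation, can be sketched compactly. This is a generic, numpy-only illustration of the two building blocks, not the MACCS study itself (the toy output function is hypothetical):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Stratified-uniform design: each variable's [0, 1) range is split into
    n_samples equal bins and each bin is sampled exactly once."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        rng.shuffle(u[:, j])          # decouple the columns
    return u

def _ranks(a):
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

def prcc(X, y):
    """Partial rank correlation coefficient of each input with the output:
    correlate the rank-transformed column with the rank-transformed output
    after regressing out all other (ranked) columns from both."""
    Xr, yr = _ranks(X), _ranks(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        rx = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        ry = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = np.corrcoef(rx, ry)[0, 1]
    return out

rng = np.random.default_rng(1)
X = latin_hypercube(200, 3, rng)
# Hypothetical consequence model: driven by inputs 0 and 1, input 2 inert.
y = 5 * X[:, 0] - 2 * X[:, 1] + 0.01 * rng.standard_normal(200)
coeffs = prcc(X, y)
```

PRCC values near +/-1 flag the dominant contributors to uncertainty (the role played by, e.g., the dispersion scaling factors in the study), while values near zero identify inputs not worth further review effort.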
Energy Science and Technology Software Center (ESTSC)
2008-05-22
Version 01 SUSD3D 2008 calculates sensitivity coefficients and standard deviation in the calculated detector responses or design parameters of interest due to input cross sections and their uncertainties. One-, two- and three-dimensional transport problems can be studied. Several types of uncertainties can be considered, i.e. those due to (1) neutron/gamma multi-group cross sections, (2) energy-dependent response functions, (3) secondary angular distribution (SAD) or secondary energy distribution (SED) uncertainties. SUSD3D, initially released in 2000, is loosely based on the SUSD code by K. Furuta, Y. Oka and S. Kondo from the University of Tokyo in Japan. SUSD 2008 modifications are primarily relevant for the sensitivity calculations of the critical systems and include: o Correction of the sensitivity calculation for prompt fission and number of delayed neutrons per fission (MT=18 and MT=455). o An option allows the re-normalization of the prompt fission spectra covariance matrices to be applied via the "normalization" of the sensitivity profiles. This option is useful in case the fission spectra covariances (MF=35) used do not comply with the ENDF-6 Format Manual rules. o For the criticality calculations the normalization can be calculated by the code SUSD3D internally. Parameter NORM should be set to 0 in this case. Total number of neutrons per fission (MT=452) sensitivities for all the fissile materials must be requested in the SUSD3D OVERLAY-2 input deck in order to allow the correct normalization. o The cross section data format reading was updated, mostly for critical systems (e.g. MT18 reaction). o Fission spectra uncertainties can be calculated using the file MF35 data processed by the ERROR-J code. o Cross sections can be input directly using input card "xs" (vector data only). o k-eff card was added for subcritical systems. o This version of SUSD3D code is compatible with the single precision DANTSYS code package (CCC-0547/07 and /08, which ...
Sensitivity of CO2 migration estimation on reservoir temperature and pressure uncertainty
Jordan, Preston; Doughty, Christine
2008-11-01
The density and viscosity of supercritical CO2 are sensitive to pressure and temperature (PT) while the viscosity of brine is sensitive primarily to temperature. Oil field PT data in the vicinity of WESTCARB's Phase III injection pilot test site in the southern San Joaquin Valley, California, show a range of PT values, indicating either PT uncertainty or variability. Numerical simulation results across the range of likely PT indicate brine viscosity variation causes virtually no difference in plume evolution and final size, but CO2 density variation causes a large difference. Relative ultimate plume size is almost directly proportional to the relative difference in brine and CO2 density (buoyancy flow). The majority of the difference in plume size occurs during and shortly after the cessation of injection.
Third Floor Plan, Second Floor Plan, First Floor Plan, Ground ...
Third Floor Plan, Second Floor Plan, First Floor Plan, Ground Floor Plan, West Bunkhouse - Kennecott Copper Corporation, On Copper River & Northwestern Railroad, Kennicott, Valdez-Cordova Census Area, AK
NASA Technical Reports Server (NTRS)
Ruane, Alex C.; Cecil, L. Dewayne; Horton, Radley M.; Gordon, Roman; McCollum, Raymond; Brown, Douglas; Killough, Brian; Goldberg, Richard; Greeley, Adam P.; Rosenzweig, Cynthia
2011-01-01
We present results from a pilot project to characterize and bound multi-disciplinary uncertainties around the assessment of maize (Zea mays) production impacts using the CERES-Maize crop model in a climate-sensitive region with a variety of farming systems (Panama). Segunda coa (autumn) maize yield in Panama currently suffers occasionally from high water stress at the end of the growing season, however under future climate conditions warmer temperatures accelerate crop maturation and elevated CO2 concentrations improve water retention. This combination reduces end-of-season water stresses and eventually leads to small mean yield gains according to median projections, although accelerated maturation reduces yields in seasons with low water stresses. Calibrations of cultivar traits, soil profile, and fertilizer amounts are most important for representing baseline yields, however sensitivity to all management factors is reduced in an assessment of future yield changes (most dramatically for fertilizers), suggesting that yield changes may be more generalizable than absolute yields. Uncertainty around General Circulation Models' (GCMs) projected changes in rainfall gains in importance throughout the century, with yield changes strongly correlated with growing season rainfall totals. Climate changes are expected to be obscured by the large inter-annual variations in Panamanian climate that will continue to be the dominant influence on seasonal maize yield into the coming decades. The relatively high (A2) and low (B1) emissions scenarios show little difference in their impact on future maize yields until the end of the century. Uncertainties related to the sensitivity of CERES-Maize to carbon dioxide concentrations have a substantial influence on projected changes, and remain a significant obstacle to climate change impacts assessment. Finally, an investigation into the potential of simple statistical yield emulators based upon key climate variables characterizes the ...
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining an accuracy similar to the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both ...
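The Non-Intrusive Spectral Projection step at the heart of the abstract is easy to show in one dimension: project the response onto probabilists' Hermite polynomials by Gauss-Hermite quadrature, then read the mean and variance off the coefficients. This is a textbook 1D sketch, not the adaptive sparse-grid FANISP algorithm:

```python
import math
import numpy as np

def hermite_pce(f, order=4, n_quad=16):
    """NISP in one standard-normal variable xi ~ N(0, 1):
    c_k = E[f(xi) He_k(xi)] / k!, evaluated with Gauss-Hermite quadrature.
    Returns PCE coefficients; mean = c_0, variance = sum_{k>=1} k! c_k^2.
    """
    # Map the physicists' Gauss-Hermite rule to the probabilists' measure.
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    xi = np.sqrt(2.0) * x
    w = w / np.sqrt(np.pi)
    # Probabilists' Hermite polynomials He_k via the standard recurrence.
    He = np.zeros((order + 1, n_quad))
    He[0] = 1.0
    if order >= 1:
        He[1] = xi
    for k in range(2, order + 1):
        He[k] = xi * He[k - 1] - (k - 1) * He[k - 2]
    fact = np.array([math.factorial(k) for k in range(order + 1)], dtype=float)
    return (He * f(xi) * w).sum(axis=1) / fact

def pce_mean_var(coeffs):
    fact = np.array([math.factorial(k) for k in range(len(coeffs))], dtype=float)
    return coeffs[0], float((fact[1:] * coeffs[1:] ** 2).sum())

# Response Y = (xi + 1)^2: exactly He_2 + 2 He_1 + 2, so mean 2, variance 6.
coeffs = hermite_pce(lambda x: (x + 1.0) ** 2)
mean, var = pce_mean_var(coeffs)
```

The sparse-grid and basis-adaptive machinery of the paper addresses the multi-dimensional version of exactly this projection, where the number of quadrature points and basis terms explodes with dimension.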
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.
Sensitivity, Prediction Uncertainty, and Detection Limit for Artificial Neural Network Calibrations.
Allegrini, Franco; Olivieri, Alejandro C
2016-08-01
With the proliferation of multivariate calibration methods based on artificial neural networks, expressions for the estimation of figures of merit such as sensitivity, prediction uncertainty, and detection limit are urgently needed. This would bring nonlinear multivariate calibration methodologies to the same status as their linear counterparts in terms of comparability. Currently only the average prediction error or the ratio of performance to deviation for a test sample set is employed to characterize and promote neural network calibrations. It is clear that additional information is required. We report for the first time expressions that easily allow one to compute three relevant figures: (1) the sensitivity, which turns out to be sample-dependent, as expected, (2) the prediction uncertainty, and (3) the detection limit. The approach resembles that employed for linear multivariate calibration, i.e., partial least-squares regression, specifically adapted to neural network calibration scenarios. As usual, both simulated and real (near-infrared) spectral data sets serve to illustrate the proposal. PMID:27363813
NASA Astrophysics Data System (ADS)
Zio, Enrico; Apostolakis, George E.
1999-03-01
This paper illustrates an application of sensitivity and uncertainty analysis techniques within a methodology for evaluating environmental restoration technologies. The methodology consists of two main parts: the first part ("analysis") integrates a wide range of decision criteria and impact evaluation techniques in a framework that emphasizes and incorporates input from stakeholders in all aspects of the process. Its products are the rankings of the alternative options for each stakeholder using, essentially, expected utility theory. The second part ("deliberation") utilizes the analytical results of the "analysis" and attempts to develop consensus among the stakeholders in a session in which the stakeholders discuss and evaluate the analytical results. This paper deals with the analytical part of the approach and the uncertainty and sensitivity analyses that were carried out in preparation for the deliberative process. The objective of these investigations was to test the robustness of the assessments and to point out possible existing sources of disagreement among the participating stakeholders, thus providing insights for the successive deliberative process. Standard techniques, such as differential analysis, Monte Carlo sampling and a two-dimensional policy region analysis proved sufficient for the task.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Simmons, Craig T.
2015-01-01
Real world models of seawater intrusion (SWI) require high computational efforts. This creates computational difficulties for the uncertainty propagation (UP) analysis of these models due to the need for repeated numerical simulations in order to adequately capture the underlying statistics that describe the uncertainty in model outputs. Moreover, despite the obvious advantages of moment-independent global sensitivity analysis (SA) methods, these methods have rarely been employed for SWI and other complex groundwater models. The reason is that moment-independent global SA methods involve repeated UP analysis which further becomes computationally demanding. This study proposes the use of non-intrusive polynomial chaos expansions (PCEs) as a means to significantly accelerate UP analysis in SWI numerical modeling studies and shows that despite the highly non-linear and non-smooth input/output relationship that exists in SWI models, non-intrusive PCEs provide a reliable and yet computationally efficient surrogate of the original numerical model. The study illustrates that for the considered two and six dimensional UP problems, PCEs offer a more accurate estimation of the statistics describing the uncertainty in model outputs compared to Monte Carlo simulations based on the original numerical model. This study also shows that the use of non-intrusive PCEs in the estimation of the moment-independent sensitivity indices (i.e. delta indices) decreases the computational time by several orders of magnitude without causing significant loss of accuracy. The use of non-intrusive PCEs for the generation of SWI hazard maps is proposed to extend the practical applications of UP analysis in coastal aquifer management studies.
de Moel, Hans; Bouwer, Laurens M; Aerts, Jeroen C J H
2014-03-01
A central tool in risk management is the exceedance-probability loss (EPL) curve, which denotes the probabilities of damages being exceeded or equalled. These curves are used for a number of purposes, including the calculation of the expected annual damage (EAD), a common indicator for risk. The model calculations that are used to create such a curve contain uncertainties that accumulate in the end result. As a result, EPL curves and EAD calculations are also surrounded by uncertainties. Knowledge of the magnitude and source of these uncertainties helps to improve assessments and leads to better informed decisions. This study, therefore, performs uncertainty and sensitivity analyses for a dike-ring area in the Netherlands, on the south bank of the river Meuse. In this study, a Monte Carlo framework is used that combines hydraulic boundary conditions, a breach growth model, an inundation model, and a damage model. It encompasses the modelling of thirteen potential breach locations and uncertainties related to probability, duration of the flood wave, height of the flood wave, erodibility of the embankment, damage curves, and the value of assets at risk. The assessment includes uncertainty and sensitivity of risk estimates for each individual location, as well as the dike-ring area as a whole. The results show that for the dike ring in question, EAD estimates exhibit a 90% uncertainty range extending from about 8 times below the median to 4.5 times above it. This level of uncertainty can mainly be attributed to uncertainty in depth-damage curves, uncertainty in the probability of a flood event and the duration of the flood wave. There are considerable differences between breach locations, both in the magnitude of the uncertainty, and in its source. This indicates that local characteristics have a considerable impact on uncertainty and sensitivity of flood damage and risk calculations. PMID:24370697
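The EAD indicator named in the abstract is simply the area under the exceedance-probability loss curve. A minimal sketch of that integration (trapezoidal rule over a few hypothetical return-period points; a real EPL curve would have many more points and tail corrections):

```python
import numpy as np

def expected_annual_damage(probs, damages):
    """Expected annual damage = area under the EPL curve, by the trapezoidal
    rule. Contributions beyond the most and least frequent listed events are
    ignored, so this is a lower-bound style estimate.

    probs:   annual exceedance probabilities, descending (frequent -> rare).
    damages: damage associated with each probability (ascending).
    """
    p = np.asarray(probs, dtype=float)
    d = np.asarray(damages, dtype=float)
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.abs(np.diff(p))))

# Hypothetical curve: 10-, 100- and 1000-year events with 1, 10, 100 M EUR damage.
ead = expected_annual_damage([0.1, 0.01, 0.001], [1.0, 10.0, 100.0])
```

In a Monte Carlo framework like the study's, each realization of the damage model yields one such curve, and the distribution of the resulting EAD values is what produces the quoted 90% uncertainty range around the median.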
ERIC Educational Resources Information Center
Uljarevic, Mirko; Carrington, Sarah; Leekam, Susan
2016-01-01
This study examined the relations between anxiety and individual characteristics of sensory sensitivity (SS) and intolerance of uncertainty (IU) in mothers of children with ASD. The mothers of 50 children completed the Hospital Anxiety and Depression Scale, the Highly Sensitive Person Scale and the IU Scale. Anxiety was associated with both SS and…
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the food pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 87 imprecisely-known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, milk growing season dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, area dependent cost, crop disposal cost, milk disposal cost, condemnation area, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: fraction of cesium deposition on grain fields that is retained on plant surfaces and transferred directly to grain, maximum allowable ground concentrations of Cs-137 and Sr-90 for production of crops, ground concentrations of Cs-134, Cs-137 and I-131 at which the disposal of milk will be initiated due to accidents that occur during the growing season, ground concentrations of Cs-134, I-131 and Sr-90 at which the disposal of crops will be initiated due to accidents that occur during the growing season, rate of depletion of Cs-137 and Sr-90 from the root zone, transfer of Sr-90 from soil to legumes, transfer of Cs-137 from soil to pasture, transfer of cesium from animal feed to meat, and the transfer of cesium, iodine and strontium from animal feed to milk.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-01-01
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC
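Of the three MCDA methods compared, weighted linear combination is the simplest to state: each standardized criterion layer is multiplied by its weight and summed per cell. A sketch with hypothetical criterion rasters and weights (not the study's actual criteria):

```python
import numpy as np

rng = np.random.default_rng(0)
# Three standardized criterion rasters on [0, 1] for a 4x4 study area
# (illustrative stand-ins for, e.g., slope, lithology, distance to drainage).
criteria = rng.random((3, 4, 4))
weights = np.array([0.5, 0.3, 0.2])  # hypothetical AHP-derived weights

assert np.isclose(weights.sum(), 1.0)  # weights must sum to one
# WLC: weighted sum of the criterion layers, cell by cell
susceptibility = np.tensordot(weights, criteria, axes=1)
print(susceptibility.shape)  # (4, 4); scores stay in [0, 1]
```

The Monte Carlo step in the paper replaces the fixed weight vector with draws from the weights' probability density functions and records the spread of the resulting susceptibility maps.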
NASA Astrophysics Data System (ADS)
Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matthew; Thurber, Clifford H.; Tung, Sui
2016-04-01
The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.
Helton, J.C.; Johnson, J.D.; Rollstin, J.A.; Shiver, A.W.; Sprung, J.L.
1995-01-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis are used in an investigation with the MACCS model of the chronic exposure pathways associated with a severe accident at a nuclear power station. The primary purpose of this study is to provide guidance on the variables to be considered in future review work to reduce the uncertainty in the important variables used in the calculation of reactor accident consequences. The effects of 75 imprecisely known input variables on the following reactor accident consequences are studied: crop growing season dose, crop long-term dose, water ingestion dose, milk growing season dose, long-term groundshine dose, long-term inhalation dose, total food pathways dose, total ingestion pathways dose, total long-term pathways dose, total latent cancer fatalities, area-dependent cost, crop disposal cost, milk disposal cost, population-dependent cost, total economic cost, condemnation area, condemnation population, crop disposal area and milk disposal area. When the predicted variables are considered collectively, the following input variables were found to be the dominant contributors to uncertainty: dry deposition velocity, transfer of cesium from animal feed to milk, transfer of cesium from animal feed to meat, ground concentration of Cs-134 at which the disposal of milk products will be initiated, transfer of Sr-90 from soil to legumes, maximum allowable ground concentration of Sr-90 for production of crops, fraction of cesium entering surface water that is consumed in drinking water, groundshine shielding factor, scale factor defining resuspension, dose reduction associated with decontamination, and ground concentration of I-131 at which disposal of crops will be initiated due to accidents that occur during the growing season.
A comparison of five forest interception models using global sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Linhoss, Anna C.; Siegert, Courtney M.
2016-07-01
Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that on average storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
NASA Astrophysics Data System (ADS)
Brandon, S. T.; Domyancic, D. M.; Johnson, B. J.; Nimmakayala, R.; Lucas, D. D.; Tannahill, J.; Christianson, G.; McEnerney, J.; Klein, R.
2011-12-01
A Lawrence Livermore National Laboratory (LLNL) multi-directorate strategic initiative is developing uncertainty quantification (UQ) tools and techniques that are being applied to climate research. The LLNL UQ Pipeline and corresponding computational tools support the ensemble-of-models approach to UQ, and these tools have enabled the production of a comprehensive set of present-day climate calculations using the Community Atmosphere Model (CAM) and, more recently, the Community Earth System Model (CESM) codes. Statistical analysis of the ensemble is made possible by fitting a response surface, or surrogate model, to the ensemble-of-models data. We describe the LLNL UQ Pipeline and techniques that enable the execution and analysis of climate UQ and sensitivity studies on LLNL's high performance computing (HPC) resources. The analysis techniques are applied to an ensemble consisting of 1,000 CAM4 simulations. We also present two methods, direct sampling and bootstrapping, that quantify the errors in the ability of the response function to model the CAM4 ensemble. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013.
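The response-surface-plus-bootstrap idea described here can be sketched on a toy one-parameter ensemble; the model, noise level, and evaluation point below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy ensemble: one uncertain parameter x, noisy scalar model output y
x = rng.uniform(-1, 1, 200)
y = 1.5 + 0.8 * x - 0.4 * x**2 + rng.normal(0, 0.05, x.size)

# Quadratic response surface (surrogate) fitted to the ensemble
coef = np.polyfit(x, y, deg=2)

# Bootstrap: refit on resampled ensembles to quantify surrogate error
boot = np.empty(500)
for b in range(boot.size):
    idx = rng.integers(0, x.size, x.size)  # resample runs with replacement
    boot[b] = np.polyval(np.polyfit(x[idx], y[idx], 2), 0.5)

pred, err = np.polyval(coef, 0.5), boot.std()
print(f"surrogate at x=0.5: {pred:.3f} +/- {err:.3f}")
```

The spread of the bootstrap predictions plays the role of the paper's error estimate for the response function; direct sampling would instead compare surrogate predictions against fresh held-out model runs.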
Babendreier, Justin E.; Castleton, Karl J.
2005-08-01
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMU).
Perspectives Gained in an Evaluation of Uncertainty, Sensitivity, and Decision Analysis Software
Davis, F.J.; Helton, J.C.
1999-02-24
The following software packages for uncertainty, sensitivity, and decision analysis were reviewed and also tested with several simple analysis problems: Crystal Ball, RiskQ, SUSA-PC, Analytica, PRISM, Ithink, Stella, LHS, STEPWISE, and JMP. Results from the review and test problems are presented. The study resulted in the recognition of the importance of four considerations in the selection of a software package: (1) the availability of an appropriate selection of distributions, (2) the ease with which data flows through the input sampling, model evaluation, and output analysis process, (3) the type of models that can be incorporated into the analysis process, and (4) the level of confidence in the software modeling and results.
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning biodiesel plants. PMID:25459861
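A standard way to compute first-order GSA indices like those used here is the Saltelli pick-and-freeze estimator for Sobol indices. A sketch on a linear test model with known indices (S1 = 0.9, S2 = 0.1); the model is illustrative, not the biodiesel TEA:

```python
import numpy as np

def model(x):
    # Linear test model with analytically known Sobol indices:
    # S1 = 9/10 = 0.9, S2 = 1/10 = 0.1 for uniform inputs on [0, 1]
    return 3 * x[:, 0] + 1 * x[:, 1]

rng = np.random.default_rng(7)
n = 100_000
A, B = rng.random((n, 2)), rng.random((n, 2))  # two independent sample sets
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # "freeze" column i from B into A
    # Saltelli first-order estimator: S_i = E[yB (yABi - yA)] / Var(Y)
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)

print([round(s, 2) for s in S])
```

Each index is the fraction of output variance attributable to one input alone, which is how a GSA ranks feedstock price and interest rate above the other TEA parameters.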
Sensitivity and uncertainty analysis of a physically-based landslide model
NASA Astrophysics Data System (ADS)
Yatheendradas, S.; Bach Kirschbaum, D.; Baum, R. L.; Godt, J.
2013-12-01
Worldwide, rainfall-induced landslides pose a major threat to life and property. Remotely sensed data combined with physically-based models of landslide initiation are a potentially economical solution for anticipating landslide activity over large, national or multinational areas as a basis for landslide early warning. Detailed high-resolution landslide modeling is challenging due to difficulties in quantifying the complex interaction between rainfall infiltration, surface materials and the typically coarse resolution of available remotely sensed data. These slope-stability models calculate coincident changes in driving and resisting forces at the hillslope level for anticipating landslides. This research seeks to better quantify the uncertainty of these models as well as evaluate their potential for application over large areas through detailed sensitivity analyses. Sensitivity to various factors including model input parameters, boundary and initial conditions, rainfall inputs, and spatial resolution of model inputs is assessed using a probabilistic ensemble setup. We use the physically-based USGS model, TRIGRS (Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability), that has been ported to NASA's high performance Land Information System (LIS) to take advantage of its multiple remote sensing data streams and tools. We apply the TRIGRS model over an example region with available in-situ gage and remotely sensed rainfall (e.g., TRMM: http://pmm.nasa.gov). To make this model applicable even in regions without relevant fine-resolution data, soil depth is estimated using topographic information, and initial water table depth using spatially disaggregated coarse-resolution modeled soil moisture data. The analyses are done across a range of fine spatial resolutions to determine the corresponding trend in the contribution of different factors to the model output uncertainty. This research acts as a guide towards application of such a detailed slope
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2016-04-01
Simulations from environmental models are affected by potentially large uncertainties stemming from various sources, including model parameters and observational uncertainty in the input/output data. Understanding the relative importance of such sources of uncertainty is essential to support model calibration, validation and diagnostic evaluation, and to prioritize efforts for uncertainty reduction. Global Sensitivity Analysis (GSA) provides the theoretical framework and the numerical tools to gain this understanding. However, in traditional applications of GSA, model outputs are an aggregation of the full set of simulated variables. This aggregation of propagated uncertainties prior to GSA may lead to a significant loss of information and may cover up local behaviour that could be of great interest. In this work, we propose a time-varying version of a recently developed density-based GSA method, called PAWN, as a viable option to reduce this loss of information. We apply our approach to a medium-complexity hydrological model in order to address two questions: [1] Can we distinguish between the relative importance of parameter uncertainty versus data uncertainty in time? [2] Do these influences change in catchments with different characteristics? The results present the first quantitative investigation on the relative importance of parameter and data uncertainty across time. They also provide a demonstration of the value of time-varying GSA to investigate the propagation of uncertainty through numerical models and therefore guide additional data collection needs and model calibration/assessment.
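PAWN measures sensitivity as the maximum Kolmogorov-Smirnov distance between the unconditional output distribution and distributions conditioned on slices of one input. A minimal sketch on a toy two-input model (invented for illustration, not the hydrological model used in the paper):

```python
import numpy as np

def ks_stat(a, b):
    """Kolmogorov-Smirnov distance between two empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.abs(Fa - Fb).max()

rng = np.random.default_rng(3)
n = 20_000
x1, x2 = rng.random(n), rng.random(n)
y = 5 * x1 + 0.5 * x2  # x1 dominates the output

pawn = []
for xi in (x1, x2):
    # condition the output on 10 slices of xi; PAWN index = max KS distance
    stats = [ks_stat(y, y[(xi >= lo) & (xi < lo + 0.1)])
             for lo in np.arange(0, 1, 0.1)]
    pawn.append(max(stats))

print([round(p, 2) for p in pawn])  # the influential x1 scores far higher
```

The time-varying variant proposed in the paper repeats this comparison at each time step of the simulated output rather than on a single aggregated scalar.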
Chapman, H D; Jeffers, T K
2015-05-01
Five successive flocks of broilers were reared in floor-pens and given different drug programs or were vaccinated against coccidiosis. Oocysts of Eimeria were isolated from the litter of pens during the fifth flock and their sensitivity to salinomycin (Sal) investigated by measuring new oocyst production following infection of medicated and unmedicated birds. Parasites obtained following 5 flocks given Sal were not well-controlled and it was concluded that they were partially resistant to the drug. Parasites obtained following 4 unmedicated flocks and one medicated flock were better controlled by Sal and it was concluded that in the absence of continuous medication there had been an improvement in drug efficacy. Sal almost completely suppressed oocyst production of isolates from treatments in which medication was followed by vaccination, indicating that when a drug program is followed by vaccination, restoration of sensitivity to Sal had occurred. PMID:25796273
Landry, Guillaume; Reniers, Brigitte; Murrer, Lars; Lutgens, Ludy; Bloemen-Van Gurp, Esther; Pignol, Jean-Philippe; Keller, Brian; Beaulieu, Luc; Verhaegen, Frank
2010-10-15
Purpose: The objective of this work is to assess the sensitivity of Monte Carlo (MC) dose calculations to uncertainties in human tissue composition for a range of low photon energy brachytherapy sources: ¹²⁵I, ¹⁰³Pd, ¹³¹Cs, and an electronic brachytherapy source (EBS). The low energy photons emitted by these sources make the dosimetry sensitive to variations in tissue atomic number due to the dominance of the photoelectric effect. This work reports dose to a small mass of water in medium D_w,m as opposed to dose to a small mass of medium in medium D_m,m. Methods: Mean adipose, mammary gland, and breast tissues (as uniform mixture of the aforementioned tissues) are investigated as well as compositions corresponding to one standard deviation from the mean. Prostate mean compositions from three different literature sources are also investigated. Three sets of MC simulations are performed with the GEANT4 code: (1) Dose calculations for idealized TG-43-like spherical geometries using point sources. Radial dose profiles obtained in different media are compared to assess the influence of compositional uncertainties. (2) Dose calculations for four clinical prostate LDR brachytherapy permanent seed implants using ¹²⁵I seeds (Model 2301, Best Medical, Springfield, VA). The effect of varying the prostate composition in the planning target volume (PTV) is investigated by comparing PTV D_90 values. (3) Dose calculations for four clinical breast LDR brachytherapy permanent seed implants using ¹⁰³Pd seeds (Model 2335, Best Medical). The effects of varying the adipose/gland ratio in the PTV and of varying the elemental composition of adipose and gland within one standard deviation of the assumed mean composition are investigated by comparing PTV D_90 values. For (2) and (3), the influence of using the mass density from CT scans instead of unit mass density is also assessed. Results: Results from simulation (1) show that variations
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
NASA Astrophysics Data System (ADS)
Shinohara, M.; Yamada, T.; Kanazawa, T.
2005-12-01
To understand the characteristics of large earthquakes that occur in a subduction zone, it is necessary to study the asperities where large earthquakes occur repeatedly. Because observations near an asperity are needed for such studies, ocean bottom seismometers (OBSs) are essential for observing seismic waves from earthquakes in subduction areas. Since a conventional OBS is designed for high-sensitivity observation, OBS records of large earthquakes occurring near the OBS are often saturated. To record large-amplitude seismic waves, a servo-type accelerometer is suitable; however, it was previously difficult for an OBS to use an accelerometer due to its large electric power consumption. Recently, a servo-type accelerometer with a large dynamic range and low power consumption is being developed. In addition, a pressure vessel of an OBS can contain many more batteries when a large titanium sphere is used. For long-term sea-floor observation of aftershocks of the 2004 Sumatra-Andaman earthquake, we installed a small three-component accelerometer in a conventional long-term OBS and obtained both high-sensitivity seismograms and low-sensitivity (strong motion) accelerograms on the sea floor. We used a compact three-component servo-type accelerometer weighing 85 grams as the seismic sensor. The measurement range and resolution of the sensor are 3 G and 10⁻⁵ G, respectively. The sensor was attached directly to the inside of the pressure vessel. Signals from the accelerometer were digitally recorded to Compact Flash memory with 16-bit resolution and a sampling frequency of 100 Hz. The OBS with the accelerometer was deployed on February 24, 2005 in a southern part of the source region of the 2004 Sumatra-Andaman earthquake by R/V Natsushima, belonging to JAMSTEC, and recovered on August 3 by R/V Baruna Jaya I, belonging to BPPT, Indonesia. Accelerograms were obtained from deployment until April 13, when the CF memory became full. Although there were some minor problems with the recording, we could obtain low-sensitivity
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Douglass, A. R.
1986-01-01
Models of stratospheric photochemistry are generally tested by comparing their predictions for the composition of the present atmosphere with measurements of species concentrations. These models are then used to make predictions of the atmospheric sensitivity to perturbations. Here the problem of the sensitivity of such a model to chlorine perturbations ranging from the present influx of chlorine-containing compounds to several times that influx is addressed. The effects of uncertainties in input parameters, including reaction rate coefficients, cross sections, solar fluxes, and boundary conditions, are evaluated using a Monte Carlo method in which the values of the input parameters are randomly selected. The results are probability distributions for present atmospheric concentrations and for calculated perturbations due to chlorine from fluorocarbons. For more than 300 Monte Carlo runs the calculated ozone perturbation for continued emission of fluorocarbons at today's rates had a mean value of -6.2 percent, with a 1-sigma width of 5.5 percent. Using the same runs but only allowing the cases in which the calculated present atmosphere values of NO, NO2, and ClO at 25 km altitude fell within the range of measurements yielded a mean ozone depletion of -3 percent, with a 1-sigma deviation of 2.2 percent. The model showed a nonlinear behavior as a function of added fluorocarbons. The mean of the Monte Carlo runs was less nonlinear than the model run using the mean values of the input parameters.
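The filtering step described above, keeping only Monte Carlo runs whose present-day predictions fall within measured ranges, is a simple accept/reject conditioning. A toy sketch in which the model, parameter spread, and acceptance window are all invented:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50_000
# Uncertain rate coefficient k, lognormal with roughly factor-of-2 spread
k = rng.lognormal(mean=0.0, sigma=0.7, size=n)

# "Present-day" observable and "perturbation" response, both driven by k
observable = 2.0 * k + rng.normal(0, 0.1, n)  # stand-in for an NO2-like proxy
depletion = -6.0 * k                          # stand-in for % ozone change

# Unconstrained vs. observation-constrained ensemble statistics
keep = (observable > 1.5) & (observable < 2.5)  # measurement window
print(f"all runs:    {depletion.mean():6.2f} +/- {depletion.std():.2f}")
print(f"constrained: {depletion[keep].mean():6.2f} "
      f"+/- {depletion[keep].std():.2f}")
```

As in the paper, conditioning on the observable narrows the predicted perturbation distribution because the runs that fail the present-day test carry the extreme parameter values.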
NASA Astrophysics Data System (ADS)
Kasprzyk, J. R.; Reed, P. M.; Kirsch, B. R.; Characklis, G. W.
2009-12-01
Risk-based water supply management presents severe cognitive, computational, and social challenges to planning in a changing world. Decision aiding frameworks must confront the cognitive biases implicit to risk, the severe uncertainties associated with long term planning horizons, and the consequent ambiguities that shape how we define and solve water resources planning and management problems. This paper proposes and demonstrates a new interactive framework for sensitivity informed de novo programming. The theoretical focus of our many-objective de novo programming is to promote learning and evolving problem formulations to enhance risk-based decision making. We have demonstrated our proposed de novo programming framework using a case study for a single city’s water supply in the Lower Rio Grande Valley (LRGV) in Texas. Key decisions in this case study include the purchase of permanent rights to reservoir inflows and anticipatory thresholds for acquiring transfers of water through optioning and spot leases. A 10-year Monte Carlo simulation driven by historical data is used to provide performance metrics for the supply portfolios. The three major components of our methodology include Sobol global sensitivity analysis, many-objective evolutionary optimization and interactive tradeoff visualization. The interplay between these components allows us to evaluate alternative design metrics, their decision variable controls and the consequent system vulnerabilities. Our LRGV case study measures water supply portfolios’ efficiency, reliability, and utilization of transfers in the water supply market. The sensitivity analysis is used interactively over interannual, annual, and monthly time scales to indicate how the problem controls change as a function of the timescale of interest. These results have then been used to improve our exploration and understanding of LRGV costs, vulnerabilities, and the water portfolios’ critical reliability constraints. These results
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
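The polynomial chaos expansion at the heart of the SRSM can be illustrated in a few lines for a single standard-normal input. A sketch under assumed choices: the toy model y = exp(xi), the truncation order, and the quadrature resolution are all arbitrary, with coefficients obtained by Galerkin projection onto probabilists' Hermite polynomials:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# One-dimensional polynomial chaos expansion in a standard-normal variable,
# the core idea behind the SRSM. The model y = exp(xi) is a toy choice.
order = 8
x, w = hermegauss(40)                 # Gauss-Hermite nodes for weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)          # normalize to the standard-normal pdf

f = np.exp(x)                         # model evaluated at the quadrature nodes
coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1)
    ek[k] = 1.0
    Hk = hermeval(x, ek)              # He_k evaluated at the nodes
    # Projection: c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
    coeffs.append(float(np.sum(w * f * Hk)) / math.factorial(k))

mean = coeffs[0]                                  # E[y] = exp(1/2)
variance = sum(c * c * math.factorial(k)          # Var[y] -> e^2 - e
               for k, c in enumerate(coeffs) if k > 0)
```

Output statistics then come directly from the expansion coefficients, which is what makes the method cheap relative to brute-force Monte Carlo.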
A sensitivity study of s-process: the impact of uncertainties from nuclear reaction rates
NASA Astrophysics Data System (ADS)
Vinyoles, N.; Serenelli, A.
2016-01-01
The slow neutron capture process (s-process) is responsible for the production of about half the elements beyond the Fe-peak. The production sites and the conditions under which the different components of the s-process occur are relatively well established. A detailed quantitative understanding of s-process nucleosynthesis may shed light on physical processes, e.g. convection and mixing, taking place in the production sites. For this, it is important that the impact of uncertainties in the nuclear physics is well understood. In this work we perform a study of the sensitivity of s-process nucleosynthesis, with particular emphasis on the main component, to the nuclear reaction rates. Our aims are: to quantify the current uncertainties in the production factors of s-process elements originating from nuclear physics, and to identify key nuclear reactions that require more precise experimental determinations. We studied two different production sites in which the s-process occurs with very different neutron exposures: 1) a low-mass extremely metal-poor star during the He-core flash (neutron densities nn reaching values of ~10^14 cm^-3); 2) the TP-AGB phase of a M⊙, Z = 0.01 model, the typical site of the main s-process component (nn up to 10^8-10^9 cm^-3). In the first case, the main variation in the production of s-process elements comes from the neutron poisons, with relative variations around 30%-50%. In the second, the neutron poisons are not as important because of the higher metallicity of the star, which actually acts as a seed; therefore, the final errors in the abundances are much lower, around 10%-25%.
Martelli, Saulo; Valente, Giordano; Viceconti, Marco; Taddei, Fulvia
2015-01-01
Subject-specific musculoskeletal models have become key tools in the clinical decision-making process. However, the sensitivity of the calculated solution to the unavoidable errors committed while deriving the model parameters from the available information is not fully understood. The aim of this study was to calculate the sensitivity of all the kinematics and kinetics variables to the inter-examiner uncertainty in the identification of the lower limb joint models. The study was based on the computed tomography of the entire lower limb from a single donor and the motion capture from a body-matched volunteer. The hip, the knee and the ankle joint models were defined following the International Society of Biomechanics recommendations. Using a software interface, five expert anatomists identified the necessary bony locations on the donor's images five times, with a three-day interval between repetitions. A detailed subject-specific musculoskeletal model was taken from an earlier study and re-formulated to define the joint axes by inputting the necessary bony locations. Gait simulations were run using OpenSim within a Monte Carlo stochastic scheme, where the locations of the bony landmarks were varied randomly according to the estimated distributions. Trends for the joint angles, moments, and the muscle and joint forces did not substantially change after parameter perturbations. The highest variations were as follows: (a) 11° calculated for the hip rotation angle, (b) 1% BW × H calculated for the knee moment and (c) 0.33 BW calculated for the ankle plantarflexor muscles and the ankle joint forces. In conclusion, the identification of the joint axes from clinical images is a robust procedure for human movement modelling and simulation. PMID:24963785
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
NASA Astrophysics Data System (ADS)
Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberger, F.; Saltelli, A.; Pagano, A.
2007-05-01
In this paper, we discuss a joint approach to calibration and uncertainty estimation for hydrologic systems that combines a top-down, data-based mechanistic (DBM) modelling methodology and a bottom-up, reductionist modelling methodology. The combined approach is applied to the modelling of the River Hodder catchment in North-West England. The top-down DBM model provides a well identified, statistically sound yet physically meaningful description of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. These characteristics are defined inductively from the data without prior assumptions about the model structure, other than that it is within the generic class of nonlinear differential-delay equations. The bottom-up modelling is developed using the TOPMODEL, whose structure is assumed a priori and is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters. The subsequent exercises in calibration and validation, performed with Generalized Likelihood Uncertainty Estimation (GLUE), are carried out in the light of the GSA and DBM analyses. This allows for the pre-calibration of the priors used for GLUE, in order to eliminate dynamical features of the TOPMODEL that have little effect on the model output and would be rejected at the structure identification phase of the DBM modelling analysis. In this way, the elements of meaningful subjectivity in the GLUE approach, which allow the modeler to interact in the modelling process by constraining the model to have a specific form prior to calibration, are combined with other more objective, data-based benchmarks for the final uncertainty estimation. GSA plays a major role in building a bridge between the hypothetico-deductive (bottom-up) and inductive (top-down) approaches and helps to improve the
Uncertainty and sensitivity in optode-based shelf-sea net community production estimates
NASA Astrophysics Data System (ADS)
Hull, Tom; Greenwood, Naomi; Kaiser, Jan; Johnson, Martin
2016-02-01
Coastal seas represent one of the most valuable and vulnerable habitats on Earth. Understanding biological productivity in these dynamic regions is vital to understanding how they may influence and be affected by climate change. A key metric to this end is net community production (NCP), the net effect of autotrophy and heterotrophy; however, accurate estimation of NCP has proved to be a difficult task. Presented here is a thorough exploration and sensitivity analysis of an oxygen mass-balance-based NCP estimation technique applied to the Warp Anchorage monitoring station, which is a permanently well-mixed shallow area within the River Thames plume. We have developed an open-source software package for calculating NCP estimates and air-sea gas flux. Our study site is identified as a region of net heterotrophy with strong seasonal variability. The annual cumulative net community oxygen production is calculated as (-5 ± 2.5) mol m-2 a-1. Short-term daily variability in oxygen is demonstrated to make accurate individual daily estimates challenging. The effects of bubble-induced supersaturation are shown to have a large influence on cumulative annual estimates and are the source of much uncertainty.
Uncertainty and sensitivity in optode-based shelf-sea net community production estimates
NASA Astrophysics Data System (ADS)
Hull, T.; Greenwood, N.; Kaiser, J.; Johnson, M.
2015-09-01
Coastal seas represent one of the most valuable and vulnerable habitats on Earth. Understanding biological productivity in these dynamic regions is vital to understanding how they may influence and be affected by climate change. A key metric to this end is net community production (NCP), the net effect of autotrophy and heterotrophy; however, accurate estimation of NCP has proved to be a difficult task. Presented here is a thorough exploration and sensitivity analysis of an oxygen mass-balance-based NCP estimation technique applied to the Warp Anchorage monitoring station, which is a permanently well-mixed shallow area within the Thames river plume. We have developed an open-source software package for calculating NCP estimates and air-sea gas flux. Our study site is identified as a region of net heterotrophy with strong seasonal variability. The annual cumulative net community oxygen production is calculated as (-5 ± 2.5) mol m-2 a-1. Short-term daily variability in oxygen is demonstrated to make accurate individual daily estimates challenging. The effects of bubble-induced supersaturation are shown to have a large influence on cumulative annual estimates, and are the source of much uncertainty.
Regional sensitivity analysis of aleatory and epistemic uncertainties on failure probability
NASA Astrophysics Data System (ADS)
Li, Guijie; Lu, Zhenzhou; Lu, Zhaoyan; Xu, Jia
2014-06-01
To analyze the effects of specific regions of the aleatory and epistemic uncertain variables on the failure probability, a regional sensitivity analysis (RSA) technique called the contribution to failure probability (CFP) plot is developed in this paper. This RSA technique can detect the important aleatory and epistemic uncertain variables, and also measure the contribution of specific regions of these important input variables to the failure probability. When computing the proposed CFP, the aleatory and epistemic uncertain variables are modeled by random and interval variables, respectively. Then, based on the hybrid probabilistic and interval model (HPIM) and the basic probability assignments in evidence theory, the failure probability of the structure with aleatory and epistemic uncertainties can be obtained through a successive construction of the second-level limit state function and the corresponding reliability analysis. A kriging method is used to establish the surrogate model of the second-level limit state function to improve the computational efficiency. Two practical examples are employed to test the effectiveness of the proposed RSA technique, and the efficiency and accuracy of the established kriging-based solution.
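The core CFP idea, attributing shares of the failure probability to regions of an input, can be sketched with crude Monte Carlo. The linear limit state, its threshold, and the normal inputs below are invented for illustration; the paper itself treats epistemic variables as intervals and accelerates the reliability analysis with a kriging surrogate rather than direct sampling:

```python
import numpy as np

# Sketch of a contribution-to-failure-probability (CFP) computation by
# crude Monte Carlo. The limit state g = 3 - x1 - x2 and its inputs are
# invented stand-ins; no interval variables or surrogate model here.
rng = np.random.default_rng(1)
n = 500_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

g = 3.0 - x1 - x2                 # failure occurs when g < 0
fail = g < 0.0
pf = fail.mean()                  # overall failure probability (~0.017 here)

# CFP over x1: the share of all failures contributed by each region of x1
edges = np.linspace(-4.0, 4.0, 9)
counts, _ = np.histogram(x1[fail], bins=edges)
cfp = counts / fail.sum()         # high-x1 regions dominate the failures
```

Plotting `cfp` against the bin edges gives the CFP plot: it shows not just that x1 matters, but which part of its range drives failure.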
Sensitivity of power functions to aggregation: Bias and uncertainty in radar rainfall retrieval
NASA Astrophysics Data System (ADS)
Sassi, M. G.; Leijnse, H.; Uijlenhoet, R.
2014-10-01
Rainfall retrieval using weather radar relies on power functions between radar reflectivity Z and rain rate R. The nonlinear nature of these relations complicates the comparison of rainfall estimates employing reflectivities measured at different scales. Transforming Z into R using relations that have been derived for other scales results in a bias and added uncertainty. We investigate the sensitivity of Z-R relations to spatial and temporal aggregation using high-resolution reflectivity fields for five rainfall events. Existing Z-R relations were employed to investigate the behavior of aggregated Z-R relations with scale, the aggregation bias, and the variability of the estimated rain rate. The prefactor and the exponent of aggregated Z-R relations systematically diverge with scale, showing a break that is event-dependent in the temporal domain and nearly constant in space. The systematic error associated with the aggregation bias at a given scale can become of the same order as the corresponding random error associated with intermittent sampling. The bias can be constrained by including information about the variability of Z within a certain scale of aggregation, and is largely captured by simple functions of the coefficient of variation of Z. Several descriptors of spatial and temporal variability of the reflectivity field are presented, to establish the links between variability descriptors and resulting aggregation bias. Prefactors in Z-R relations can be related to multifractal properties of the rainfall field. We find evidence of scaling breaks in the structural analysis of spatial rainfall with aggregation.
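The aggregation bias itself is easy to demonstrate: because Z = aR^b is convex in R for b > 1, inverting the power law on an averaged reflectivity field is not the same as averaging the point-scale rain rates. A sketch with standard Marshall-Palmer coefficients and a synthetic lognormal rain-rate field (illustrative choices, not one of the paper's five events):

```python
import numpy as np

# Demonstration of the Z-R aggregation bias: retrieving rain rate from the
# *averaged* reflectivity differs from averaging point-scale rain rates.
# Marshall-Palmer coefficients; synthetic lognormal rain field.
rng = np.random.default_rng(2)
a, b = 200.0, 1.6                       # Z = a * R^b

R = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # point rain rates, mm/h
Z = a * R ** b                          # point reflectivities (linear units)

R_from_mean_Z = (Z.mean() / a) ** (1.0 / b)   # aggregate Z first, then invert
R_mean = R.mean()                             # true areal-mean rain rate

bias = R_from_mean_Z / R_mean           # > 1 because Z(R) is convex (b > 1)
```

The bias grows with the variability of Z inside the averaging footprint, which is why the abstract relates it to the coefficient of variation of Z.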
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Uncertainty, sensitivity analysis and the role of data based mechanistic modeling in hydrology
NASA Astrophysics Data System (ADS)
Ratto, M.; Young, P. C.; Romanowicz, R.; Pappenberge, F.; Saltelli, A.; Pagano, A.
2006-09-01
In this paper, we discuss the problem of calibration and uncertainty estimation for hydrologic systems from two points of view: a bottom-up, reductionist approach; and a top-down, data-based mechanistic (DBM) approach. The two approaches are applied to the modelling of the River Hodder catchment in North-West England. The bottom-up approach is developed using the TOPMODEL, whose structure is evaluated by global sensitivity analysis (GSA) in order to specify the most sensitive and important parameters; and the subsequent exercises in calibration and validation are carried out in the light of this sensitivity analysis. GSA helps to improve the calibration of hydrological models, making their properties more transparent and highlighting mis-specification problems. The DBM model provides a quick and efficient analysis of the rainfall-flow data, revealing important characteristics of the catchment-scale response, such as the nature of the effective rainfall nonlinearity and the partitioning of the effective rainfall into different flow pathways. TOPMODEL calibration takes more time and it explains the flow data a little less well than the DBM model. The main differences in the modelling results are in the nature of the models and the flow decomposition they suggest. The "quick" (63%) and "slow" (37%) components of the decomposed flow identified in the DBM model show a clear partitioning of the flow, with the quick component apparently accounting for the effects of surface and near-surface processes, and the slow component arising from the displacement of groundwater into the river channel (base flow). On the other hand, the two output flow components in TOPMODEL have a different physical interpretation, with a single flow component (95%) accounting for both slow (subsurface) and fast (surface) dynamics, while the other, very small component (5%) is interpreted as an instantaneous surface runoff generated by rainfall falling on areas of saturated soil. The results of
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
2014-06-15
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result, and the input parameters for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment
Gerhard Strydom
2011-01-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400 MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the
Ivanova, T.; Laville, C.; Dyrda, J.; Mennerdahl, D.; Golovko, Y.; Raskach, K.; Tsiboulia, A.; Lee, G. S.; Woo, S. W.; Bidaud, A.; Sabouri, P.; Bledsoe, K.; Rearden, B.; Gulliford, J.; Michel-Sendis, F.
2012-07-01
The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)
NASA Astrophysics Data System (ADS)
Ploquin, Nicolas; Song, William; Lau, Harold; Dunscombe, Peter
2005-08-01
The goal of this study was to assess the impact of set-up uncertainty on compliance with the objectives and constraints of an intensity modulated radiation therapy protocol for early stage cancer of the oropharynx. As the convolution approach to the quantitative study of set-up uncertainties cannot accommodate either surface contours or internal inhomogeneities, both of which are highly relevant to sites in the head and neck, we have employed the more resource intensive direct simulation method. The impact of both systematic (variable from 0 to 6 mm) and random (fixed at 2 mm) set-up uncertainties on compliance with the criteria of the RTOG H-0022 protocol has been examined for eight geometrically complex structures: CTV66 (gross tumour volume and palpable lymph nodes suspicious for metastases), CTV54 (lymph node groups or surgical neck levels at risk of subclinical metastases), glottic larynx, spinal cord, brainstem, mandible and left and right parotids. In a probability-based approach, both dose-volume histograms and equivalent uniform doses were used to describe the dose distributions achieved by plans for two patients, in the presence of set-up uncertainty. The equivalent uniform dose is defined to be that dose which, when delivered uniformly to the organ of interest, will lead to the same response as the non-uniform dose under consideration. For systematic set-up uncertainties greater than 2 mm and 5 mm respectively, coverage of the CTV66 and CTV54 could be significantly compromised. Directional sensitivity was observed in both cases. Most organs at risk (except the glottic larynx, which did not comply under static conditions) continued to meet the dose constraints up to 4 mm systematic uncertainty for both plans. The exception was the contralateral parotid gland, which this protocol is specifically designed to protect. Sensitivity to systematic set-up uncertainty of 2 mm was observed for this organ at risk in both clinical plans.
Davidson, J.W.; Dudziak, D.J.; Pelloni, S.; Stepanek, J.
1988-01-01
In a recent common Los Alamos/PSI effort, a sensitivity and nuclear data uncertainty path for the modular code system AARE (Advanced Analysis for Reactor Engineering) was developed. This path includes the cross-section code TRAMIX, the one-dimensional finite difference S_N-transport code ONEDANT, the two-dimensional finite element S_N-transport code TRISM, and the one- and two-dimensional sensitivity and nuclear data uncertainty code SENSIBL. Within the framework of the present work, a complete set of forward and adjoint two-dimensional TRISM calculations were performed, both for the bare as well as for the Pb- and Be-preceded LBM, using MATXS8 libraries. Then a two-dimensional sensitivity and uncertainty analysis for all cases was performed. The goal of this analysis was the determination of the uncertainties of the calculated tritium production per source neutron from lithium along the central Li2O rod in the LBM. Considered were the contributions from 1H, 6Li, 7Li, 9Be, natC, 14N, 16O, 23Na, 27Al, natSi, natCr, natFe, natNi, and natPb. 22 refs., 1 fig., 3 tabs.
NASA Astrophysics Data System (ADS)
Smith, Michael S.; Hix, W. Raphael; Parete-Koon, Suzanne; Dessieux, Luc; Ma, Zhanwen; Starrfield, Sumner; Bardayan, Daniel W.; Guidry, Michael W.; Smith, Donald L.; Blackmon, Jeffery C.; Mezzacappa, Anthony
2004-12-01
We utilize multiple-zone, post-processing element synthesis calculations to determine the impact of recent ORNL radioactive ion beam measurements on predictions of novae and X-ray burst simulations. We also assess the correlations between all relevant reaction rates and all synthesized isotopes, and translate nuclear reaction rate uncertainties into abundance prediction uncertainties, via a unique Monte Carlo technique.
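The Monte Carlo translation of rate uncertainties into abundance uncertainties can be sketched for a single toy species. The two-reaction equilibrium, the nominal rates, and the factor uncertainties below are schematic inventions, not ORNL evaluations or the study's reaction network:

```python
import numpy as np

# Monte Carlo propagation of reaction-rate uncertainties into an abundance
# uncertainty, in the spirit of the post-processing study above. The
# production/destruction equilibrium and uncertainty factors are schematic.
rng = np.random.default_rng(6)
n = 100_000

# Rates sampled log-normally, with "factor" uncertainties of 1.3 and 2.0
prod = 1.0e-3 * rng.lognormal(0.0, np.log(1.3), n)   # production rate
dest = 5.0e-4 * rng.lognormal(0.0, np.log(2.0), n)   # destruction rate

Y_eq = prod / dest            # equilibrium abundance of the toy species

# 16th-84th percentile spread of the resulting abundance distribution
spread = np.percentile(Y_eq, 84.0) / np.percentile(Y_eq, 16.0)
```

Repeating this for every rate-isotope pair, and correlating sampled rates with resulting abundances, is essentially the correlation assessment the abstract describes.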
Sensitivity and uncertainty analysis for a field-scale P loss model
Technology Transfer Automated Retrieval System (TEKTRAN)
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
Sensitivity and uncertainty analysis for the annual P loss estimator (APLE) model
Technology Transfer Automated Retrieval System (TEKTRAN)
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
Floors: Selection and Maintenance.
ERIC Educational Resources Information Center
Berkeley, Bernard
Flooring for institutional, commercial, and industrial use is described with regard to its selection, care, and maintenance. The following flooring and subflooring material categories are discussed--(1) resilient floor coverings, (2) carpeting, (3) masonry floors, (4) wood floors, and (5) "formed-in-place floors". The properties, problems,…
Gul, R; Bernhard, S
2015-11-01
In computational cardiovascular models, parameters are one of the major sources of uncertainty, which make the models unreliable and less predictive. In order to achieve predictive models that allow the investigation of cardiovascular diseases, sensitivity analysis (SA) can be used to quantify and reduce the uncertainty in outputs (pressure and flow) caused by input (electrical and structural) model parameters. In the current study, three variance-based global sensitivity analysis (GSA) methods (Sobol, FAST and a sparse-grid stochastic collocation technique based on the Smolyak algorithm) were applied to a lumped parameter model of the carotid bifurcation. Sensitivity analysis was carried out to identify and rank the most sensitive parameters, as well as to fix less sensitive parameters at their nominal values (factor fixing). In this context, network-location and temporal dependent sensitivities were also discussed, to identify optimal measurement locations in the carotid bifurcation and optimal temporal regions for each parameter in the pressure and flow waves, respectively. Results show that, for both pressure and flow, flow resistance (R), diameter (d) and length of the vessel (l) are sensitive within the right common carotid (RCC), right internal carotid (RIC) and right external carotid (REC) arteries, while compliance of the vessels (C) and blood inertia (L) are sensitive only at the RCC. Moreover, Young's modulus (E) and wall thickness (h) exhibit low sensitivities on pressure and flow at all locations of the carotid bifurcation. Results of the network-location and temporal variability analyses revealed that most of the sensitivity was found in common time regions, i.e. early systole, peak systole and end systole. PMID:26367184
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
Bean, J.E.; Berglund, J.W.; Davis, F.J.; Economy, K.; Garner, J.W.; Helton, J.C.; Johnson, J.D.; MacKinnon, R.J.; Miller, J.; O'Brien, D.G.; Ramsey, J.L.; Schreiber, J.D.; Shinta, A.; Smith, L.N.; Stockman, C.; Stoelzel, D.M.; Vaughn, P.
1998-09-01
The Waste Isolation Pilot Plant (WIPP) is located in southeastern New Mexico and is being developed by the U.S. Department of Energy (DOE) for the geologic (deep underground) disposal of transuranic (TRU) waste. A detailed performance assessment (PA) for the WIPP was carried out in 1996 and supports an application by the DOE to the U.S. Environmental Protection Agency (EPA) for the certification of the WIPP for the disposal of TRU waste. The 1996 WIPP PA uses a computational structure that maintains a separation between stochastic (i.e., aleatory) and subjective (i.e., epistemic) uncertainty, with stochastic uncertainty arising from the many possible disruptions that could occur over the 10,000 yr regulatory period that applies to the WIPP and subjective uncertainty arising from the imprecision with which many of the quantities required in the PA are known. Important parts of this structure are (1) the use of Latin hypercube sampling to incorporate the effects of subjective uncertainty, (2) the use of Monte Carlo (i.e., random) sampling to incorporate the effects of stochastic uncertainty, and (3) the efficient use of the necessarily limited number of mechanistic calculations that can be performed to support the analysis. The use of Latin hypercube sampling generates a mapping from imprecisely known analysis inputs to analysis outcomes of interest that provides both a display of the uncertainty in analysis outcomes (i.e., uncertainty analysis) and a basis for investigating the effects of individual inputs on these outcomes (i.e., sensitivity analysis). The sensitivity analysis procedures used in the PA include examination of scatterplots, stepwise regression analysis, and partial correlation analysis. Uncertainty and sensitivity analysis results obtained as part of the 1996 WIPP PA are presented and discussed. Specific topics considered include two-phase flow in the vicinity of the repository, radionuclide release from the repository, fluid flow and radionuclide
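The Latin hypercube step can be sketched in a few lines. This is a generic stratified sampler on the unit hypercube (uniform marginals), not the WIPP PA's actual sampler or its input distributions; in practice each column would be mapped through the inverse CDF of the corresponding epistemic input:

```python
import numpy as np

# Minimal Latin hypercube sampler of the kind used to propagate subjective
# (epistemic) uncertainty: each input is stratified into n equal-probability
# bins, one draw per bin, with an independently shuffled bin order per input.
def latin_hypercube(n, d, rng):
    # one uniform draw inside each of the n strata, for every dimension
    u = (rng.uniform(size=(n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):                       # decouple strata across dims
        u[:, j] = u[rng.permutation(n), j]
    return u

rng = np.random.default_rng(3)
sample = latin_hypercube(10, 2, rng)         # 10 points in [0, 1)^2
```

The stratification guarantees every marginal is covered evenly even with few samples, which is exactly why the PA can afford so few mechanistic calculations per mapping.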
Understanding ozone response to its precursor emissions is crucial for effective air quality management practices. This nonlinear response is usually simulated using chemical transport models, and the modeling results are affected by uncertainties in emissions inputs. In this stu...
Pruet, J
2007-06-23
This report describes Kiwi, a program developed at Livermore to enable mature studies of the relation between imperfectly known nuclear physics and uncertainties in simulations of complicated systems. Kiwi includes a library of evaluated nuclear data uncertainties, tools for modifying data according to these uncertainties, and a simple interface for generating processed data used by transport codes. Kiwi also provides access to calculations of k-eigenvalues for critical assemblies, allowing the user to check the implications of data modifications against integral experiments for multiplying systems. Kiwi is written in Python. The uncertainty library has the same format and directory structure as the native ENDL used at Livermore. Calculations for critical assemblies rely on deterministic and Monte Carlo codes developed by B Division.
The response of the National Oceanic and Atmospheric Administration multilayer inferential dry deposition velocity model (NOAA-MLM) to error in meteorological inputs and model parameterization is reported. Monte Carlo simulations were performed to assess the uncertainty in NOA...
Propagation of uncertainty and sensitivity analysis in an integral oil-gas plume model
NASA Astrophysics Data System (ADS)
Wang, Shitao; Iskandarani, Mohamed; Srinivasan, Ashwanth; Thacker, W. Carlisle; Winokur, Justin; Knio, Omar M.
2016-05-01
Polynomial Chaos expansions are used to analyze uncertainties in an integral oil-gas plume model simulating the Deepwater Horizon oil spill. The study focuses on six uncertain input parameters—two entrainment parameters, the gas to oil ratio, two parameters associated with the droplet-size distribution, and the flow rate—that impact the model's estimates of the plume's trap and peel heights, and of its various gas fluxes. The ranges of the uncertain inputs were determined by experimental data. Ensemble calculations were performed to construct polynomial chaos-based surrogates that describe the variations in the outputs due to variations in the uncertain inputs. The surrogates were then used to estimate reliably the statistics of the model outputs, and to perform an analysis of variance. Two experiments were performed to study the impacts of high and low flow rate uncertainties. The analysis shows that in the former case the flow rate is the largest contributor to output uncertainties, whereas in the latter case, with the uncertainty range constrained by a posteriori analyses, the flow rate's contribution becomes negligible. The trap and peel height uncertainties are then mainly due to uncertainties in the 95% percentile of the droplet size and in the entrainment parameters.
Rising, M.E.
2015-01-15
The prompt fission neutron spectrum (PFNS) uncertainties in the n+{sup 239}Pu fission reaction are used to study the impact on several fast critical assemblies modeled in the MCNP6.1 code. The newly developed sensitivity capability in MCNP6.1 is used to compute the k{sub eff} sensitivity coefficients with respect to the PFNS. In comparison, the covariance matrix given in the ENDF/B-VII.1 library is decomposed and randomly sampled realizations of the PFNS are propagated through the criticality calculation, preserving the PFNS covariance matrix. The information gathered from both approaches, including the overall k{sub eff} uncertainty, is statistically analyzed. Overall, the forward and backward approaches agree as expected. The results from the new method appear to be limited by the process used to evaluate the PFNS rather than by any flaw in the method itself. Final thoughts and directions for future work are suggested.
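The sampling half of such a comparison can be sketched generically: decompose a covariance matrix and draw realizations that preserve it. The 4-group mean, correlation matrix, and 5% relative uncertainty below are illustrative inventions, not the ENDF/B-VII.1 PFNS covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented 4-group "spectrum" mean and correlation (positive definite).
mean = np.array([0.10, 0.40, 0.35, 0.15])
corr = np.array([[1.0, 0.5, 0.2, 0.1],
                 [0.5, 1.0, 0.5, 0.2],
                 [0.2, 0.5, 1.0, 0.5],
                 [0.1, 0.2, 0.5, 1.0]])
std = 0.05 * mean                      # assume 5% relative uncertainty
cov = np.outer(std, std) * corr

# x = mean + L z with L L^T = cov: the ensemble of realizations has the
# prescribed covariance in expectation, so the evaluated covariance is
# preserved when the samples are propagated through the calculation.
L = np.linalg.cholesky(cov)
z = rng.standard_normal((4, 100_000))
samples = mean[:, None] + L @ z

print(np.allclose(np.corrcoef(samples), corr, atol=0.02))  # → True
```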
NASA Astrophysics Data System (ADS)
He, R.; Pang, B.
2015-05-01
The increasing water problems and eco-environmental issues of the Heihe River basin have attracted widespread attention. In this research, the VIC (Variable Infiltration Capacity) model was selected to simulate the water cycle of the upstream Heihe River basin. The GLUE (Generalized Likelihood Uncertainty Estimation) method was used to study the sensitivity of the model parameters and the uncertainty of model outputs. The results showed that the Nash-Sutcliffe efficiency coefficient was 0.62 in the calibration period and 0.64 in the validation period. Of the seven selected parameters, Dm (maximum baseflow that can occur from the third soil layer), Ws (fraction of the maximum soil moisture of the third soil layer where non-linear baseflow occurs), and d1 (soil depth of the first soil layer) were very sensitive, especially d1. Observed discharges fell almost entirely within the 95% prediction confidence interval.
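A minimal GLUE sketch, with an invented linear rainfall-runoff stand-in instead of VIC: Monte Carlo parameter sets are scored with the Nash-Sutcliffe efficiency, sets above a behavioral threshold are retained, and their simulations form the 95% prediction bounds.

```python
import numpy as np

rng = np.random.default_rng(2)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Invented "watershed": discharge = a * rain + b plus observation noise.
rain = rng.uniform(0, 10, 50)
obs = 2.0 * rain + 1.0 + rng.normal(0, 0.5, 50)

# GLUE: sample parameters, keep behavioral sets (NSE > 0.6), and take
# percentile bounds across the behavioral simulations at each time step.
a = rng.uniform(0, 4, 5000)
b = rng.uniform(-2, 4, 5000)
sims = a[:, None] * rain + b[:, None]
scores = np.array([nse(obs, s) for s in sims])
behavioral = sims[scores > 0.6]

lower = np.percentile(behavioral, 2.5, axis=0)
upper = np.percentile(behavioral, 97.5, axis=0)
coverage = float(np.mean((obs >= lower) & (obs <= upper)))
print(coverage > 0.8)  # most observations fall inside the GLUE bounds
```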
Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics
NASA Technical Reports Server (NTRS)
Kawa, S. Randolph; Stolarksi, Richard S.; Douglass, Anne R.; Newman, Paul A.
2008-01-01
Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding for this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. The fidelity of the models is assessed in comparison with a wide range of observations. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. A typical stratospheric chemistry mechanism has on the order of 50-100 species undergoing over a hundred intermolecular reactions and several tens of photolysis reactions. The rates of all of these reactions are subject to uncertainty, some substantial. Given the complexity of the models, however, it is difficult to quantify uncertainties in many aspects of the system. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluations are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.
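The random-combination step can be sketched as log-normal perturbations of nominal rates, where each rate is known to within a 1-sigma factor f (the convention of the kinetics evaluations). The reaction names, factors, and the scalar loss proxy below are hypothetical placeholders for a real box model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 1-sigma uncertainty factors (illustrative values only).
factors = {"ClO+ClO+M": 1.3, "ClO+BrO": 1.25, "Cl+O3": 1.15, "J(Cl2O2)": 1.4}
base = {k: 1.0 for k in factors}  # normalized nominal rates

def perturbed_rates(base, f, rng):
    # Multiply each nominal rate by f**z with z ~ N(0,1): a log-normal
    # perturbation that stays positive and matches "known to a factor f".
    return {k: base[k] * f[k] ** rng.standard_normal() for k in base}

# Toy proxy: ozone loss dominated by the ClO-dimer and ClO+BrO cycles.
def loss_proxy(r):
    return 0.7 * r["ClO+ClO+M"] * r["J(Cl2O2)"] + 0.3 * r["ClO+BrO"]

samples = [loss_proxy(perturbed_rates(base, factors, rng)) for _ in range(20000)]
lo, hi = np.percentile(samples, [2.5, 97.5])
print(hi / lo > 2.0)  # the 95% loss range spans more than a factor of two
```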
James, Scott; Cohan, Alexander
2005-08-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. A heterogeneous stochastic model is developed and corresponding flow fields and particle tracks are calculated. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. Finally, a one-dimensional transport abstraction is modeled based on the particle path lengths and the materials through which each particle passes to yield breakthrough curves at the model boundary. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL. Acknowledgement: This project was funded by Japan Nuclear Cycle Development Institute (JNC). This work was conducted jointly between Sandia National Laboratories (SNL) and JNC under a joint JNC/U.S. Department of Energy (DOE) work agreement. Performance assessment calculations were conducted and analyzed at SNL based on a preliminary model by Kashima, Quintessa, and JNC and include significant input from JNC to make sure the results are relevant for the Japanese nuclear waste program.
NASA Astrophysics Data System (ADS)
Pecknold, Sean; Osler, John C.
2012-02-01
Accurate sonar performance prediction modelling depends on a good knowledge of the local environment, including bathymetry, oceanography and seabed properties. The function of rapid environmental assessment (REA) is to obtain relevant environmental data in a tactically relevant time frame, with REA methods categorized by the nature and immediacy of their application, from historical databases through remotely sensed data to in situ acquisition. However, each REA approach is subject to its own set of uncertainties, which are in turn transferred to uncertainty in sonar performance prediction. An approach to quantify and manage this uncertainty has been developed through the definition of sensitivity metrics and Monte Carlo simulations of acoustic propagation using multiple realizations of the marine environment. This approach can be simplified by using a linearized two-point sensitivity measure based on the statistics of the environmental parameters used by acoustic propagation models. The statistical properties of the environmental parameters may be obtained from compilations of historical data, forecast conditions or in situ measurements. During a field trial off the coast of Nova Scotia, a set of environmental data, including oceanographic and geoacoustic parameters, were collected together with acoustic transmission loss data. At the same time, several numerical models to forecast the oceanographic conditions were run for the area, including 5- and 1-day forecasts as well as nowcasts. Data from the model runs are compared to each other and to in situ environmental sampling, and estimates of the environmental uncertainties are calculated. The forecast and in situ data are used with historical geoacoustic databases and geoacoustic parameters collected using REA techniques, respectively, to perform acoustic transmission loss predictions, which are then compared to measured transmission loss. The progression of uncertainties in the marine environment, within and
While there is a high potential for exposure of humans and ecosystems to chemicals released from hazardous waste sites, the degree to which this potential is realized is often uncertain. Conceptually divided among parameter, model, and modeler uncertainties imparted during simula...
Dowdell, S; Grassberger, C; Paganetti, H
2014-06-01
Purpose: Evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5mm systematic or random setup uncertainties applied over a 35 × 2.5Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots - BS σ=18-9mm, intermediate spots - IS σ=11-5mm, small spots - SS σ=4-2mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V{sub 95} (ΔV{sub 95}) and target homogeneity (ΔD{sub 1}–D{sub 99}) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47(BS), 1.85±0.66(IS) and 1.31±0.35(SS) times higher in IMPT-full compared to IMPT-20% when ±5mm systematic setup uncertainties were applied. The ΔV{sub 95} variations were also 1.53±0.26(BS), 1.60±0.50(IS) and 1.38±0.38(SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90(BS), 2.13±1.08(IS) and 1.45±0.57(SS) times higher in IMPTfull compared to IMPT-20%. For all spot sizes considered, the ΔD{sub 1}-D{sub 99} coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the
Gureghian, A.B.; Sagar, B.
1993-12-31
This paper presents a method for sensitivity and uncertainty analyses of a hypothetical nuclear waste repository located in a layered and fractured unconfined aquifer. Groundwater travel time (GWTT) has been selected as the performance measure. The repository is located in the unsaturated zone, and the source of aquifer recharge is due solely to steady infiltration impinging uniformly over the surface area that is to be modeled. The equivalent porous media concept is adopted to model the fractured zone in the flow field. The evaluation of pathlines and travel time of water particles in the flow domain is performed based on a Lagrangian concept. The Bubnov-Galerkin finite-element method is employed to solve the primary flow problem (non-linear), the equation of motion, and the adjoint sensitivity equations. The matrix equations are solved with a Gaussian elimination technique using sparse matrix solvers. The sensitivity measure corresponds to the first derivative of the performance measure (GWTT) with respect to the parameters of the system. The uncertainty in the computed GWTT is quantified by using the first-order second-moment (FOSM) approach, a probabilistic method that relies on the mean and variance of the system parameters and the sensitivity of the performance measure with respect to these parameters. A test case corresponding to a layered and fractured, unconfined aquifer is then presented to illustrate the various features of the method.
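The FOSM step reduces to a single quadratic form: with sensitivities s_i = dGWTT/dp_i at the mean parameters and parameter covariance C, Var(GWTT) is approximately s^T C s. A sketch with invented sensitivities and covariance:

```python
import numpy as np

def fosm_variance(sens, cov):
    """First-order second-moment variance: s^T C s."""
    sens = np.asarray(sens, dtype=float)
    return float(sens @ cov @ sens)

# Invented GWTT sensitivities (yr per unit parameter change) with respect
# to infiltration rate, fracture conductivity, and matrix porosity.
sens = [-120.0, -45.0, 300.0]
cov = np.diag([0.02, 0.05, 0.001])  # assumed independent parameters

var = fosm_variance(sens, cov)
print(round(float(np.sqrt(var)), 2))  # → 21.89, 1-sigma GWTT uncertainty in years
```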
Vesselinov, V. V.; Keating, E. H.; Zyvoloski, G. A.
2002-01-01
Predictions and their uncertainty are key aspects of any modeling effort. The prediction uncertainty can be significant when the predictions depend on uncertain system parameters. We analyze prediction uncertainties through constrained nonlinear second-order optimization of an inverse model. The optimized objective function is the weighted squared-difference between observed and simulated system quantities (flux and time-dependent head data). The constraints are defined by the maximization/minimization of the prediction within a given objective-function range. The method is applied in capture-zone analyses of groundwater-supply systems using a three-dimensional numerical model of the Espanola Basin aquifer. We use the finite-element simulator FEHM coupled with parameter-estimation/predictive-analysis code PEST. The model is run in parallel on a multi-processor supercomputer. We estimate sensitivity and uncertainty of model predictions such as capture-zone identification and travel times. While the methodology is extremely powerful, it is numerically intensive.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Michael Pernice
2012-10-01
Grid-to-rod fretting is the leading cause of fuel failures in pressurized water reactors, and is one of the challenge problems being addressed by the Consortium for Advanced Simulation of Light Water Reactors to guide its efforts to develop a virtual reactor environment. Prior and current efforts in modeling and simulation of grid-to-rod fretting are discussed. Sources of uncertainty in grid-to-rod fretting are also described.
Sensitivity of Polar Stratospheric Ozone Loss to Uncertainties in Chemical Reaction Kinetics
NASA Technical Reports Server (NTRS)
Kawa, S. Randolph; Stolarski, Richard S.; Douglass, Anne R.; Newman, Paul A.
2008-01-01
Several recent observational and laboratory studies of processes involved in polar stratospheric ozone loss have prompted a reexamination of aspects of our understanding for this key indicator of global change. To a large extent, our confidence in understanding and projecting changes in polar and global ozone is based on our ability to simulate these processes in numerical models of chemistry and transport. These models depend on laboratory-measured kinetic reaction rates and photolysis cross sections to simulate molecular interactions. In this study we use a simple box-model scenario for Antarctic ozone to estimate the uncertainty in loss attributable to known reaction kinetic uncertainties. Following the method of earlier work, rates and uncertainties from the latest laboratory evaluations are applied in random combinations. We determine the key reactions and rates contributing the largest potential errors and compare the results to observations to evaluate which combinations are consistent with atmospheric data. Implications for our theoretical and practical understanding of polar ozone loss will be assessed.
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Fieberg, J.; Jenkins, Kurt J.
2005-01-01
Often landmark conservation decisions are made despite an incomplete knowledge of system behavior and inexact predictions of how complex ecosystems will respond to management actions. For example, predicting the feasibility and likely effects of restoring top-level carnivores such as the gray wolf (Canis lupus) to North American wilderness areas is hampered by incomplete knowledge of the predator-prey system processes and properties. In such cases, global sensitivity measures, such as Sobol' indices, allow one to quantify the effect of these uncertainties on model predictions. Sobol' indices are calculated by decomposing the variance in model predictions (due to parameter uncertainty) into main effects of model parameters and their higher order interactions. Model parameters with large sensitivity indices can then be identified for further study in order to improve predictive capabilities. Here, we illustrate the use of Sobol' sensitivity indices to examine the effect of parameter uncertainty on the predicted decline of elk (Cervus elaphus) population sizes following a hypothetical reintroduction of wolves to Olympic National Park, Washington, USA. The strength of density dependence acting on survival of adult elk and magnitude of predation were the most influential factors controlling elk population size following a simulated wolf reintroduction. In particular, the form of density dependence in natural survival rates and the per-capita predation rate together accounted for over 90% of variation in simulated elk population trends. Additional research on wolf predation rates on elk and natural compensations in prey populations is needed to reliably predict the outcome of predator-prey system behavior following wolf reintroductions.
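A Sobol' first-order index can be estimated with the standard pick-freeze (Saltelli-type) scheme; the two-input toy model below is an invented stand-in for the elk-population IBM, chosen so the indices are known analytically (48/61 and 12/61).

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Invented stand-in for the IBM: two uncertain inputs (think density
    # dependence and predation rate) plus an interaction term.
    return 3.0 * x[:, 0] + 1.0 * x[:, 1] + 2.0 * x[:, 0] * x[:, 1]

# Pick-freeze: draw independent matrices A and B; AB_i is B with column i
# taken from A. Then S_i = (E[f(A) f(AB_i)] - E[f]^2) / Var(f).
n = 200_000
A = rng.uniform(0, 1, (n, 2))
B = rng.uniform(0, 1, (n, 2))

fA = model(A)
var = fA.var()
S = []
for i in range(2):
    ABi = B.copy()
    ABi[:, i] = A[:, i]
    S.append(np.mean(fA * model(ABi)) - fA.mean() ** 2)
S = np.array(S) / var
print(S.round(2))  # close to the analytic indices 0.79 and 0.20
```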
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis by conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes
Random vibration sensitivity studies of modeling uncertainties in the NIF structures
Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.
1996-12-31
The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. This facility will achieve fusion ignition utilizing solid-state lasers as the energy driver. The facility will cover an estimated 33,400 m{sup 2} at an average height of 5--6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 {micro}m rms radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2--3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures. Low-level ambient vibrations, 10{sup {minus}10} g{sup 2}/Hz over a frequency range of 1 to 200 Hz, are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 {micro}rad rms laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam line transport structures as determined by random vibration analysis will be discussed.
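For the flat ambient spectrum quoted above, the implied RMS acceleration is just the square root of the area under the PSD, a quick sanity check on the vibration inputs:

```python
import numpy as np

S0 = 1e-10           # g^2/Hz, the assumed flat ambient floor
f1, f2 = 1.0, 200.0  # Hz, the band quoted for facility operations

# For a flat PSD, g_rms = sqrt(S0 * (f2 - f1)).
g_rms = float(np.sqrt(S0 * (f2 - f1)))
print(f"{g_rms:.2e} g")  # about 1.4e-04 g of broadband ambient acceleration
```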
NASA Astrophysics Data System (ADS)
Mateus, C.; Tullos, D.
2014-12-01
This study investigated how reservoir performance varied across different hydrogeologic settings and under plausible future climate scenarios. The study was conducted in the Santiam River basin, OR, USA, comparing the North Santiam basin (NSB), with high permeability and extensive groundwater storage, and the South Santiam basin (SSB), with low permeability, little groundwater storage, and rapid runoff response. We applied projections of future temperature and precipitation from global climate models to a rainfall-runoff model, coupled with a formal Bayesian uncertainty analysis, to project future inflow hydrographs as inputs to a reservoir operations model. The performance of reservoir operations was evaluated as the reliability in meeting flood management, spring and summer environmental flows, and hydropower generation objectives. Despite projected increases in winter flows and decreases in summer flows, results suggested little evidence of a response in reservoir operation performance to a warming climate, with the exception of summer flow targets in the SSB. Independent of climate impacts, historical prioritization of reservoir operations appeared to impact reliability, suggesting areas where operation performance may be improved. Results also highlighted how hydrologic uncertainty is likely to complicate planning for climate change in basins with substantial groundwater interactions.
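Reliability here is the fraction of time steps on which an operational target is met. A sketch with invented flow samples (a stationary climate versus a uniformly drier one) and a hypothetical environmental-flow target:

```python
import numpy as np

rng = np.random.default_rng(7)

def reliability(flows, target):
    """Fraction of time steps on which the flow target is met."""
    return float(np.mean(flows >= target))

# Invented summer releases (m^3/s) and target; a drier climate shifts flows down.
historical = rng.normal(30.0, 5.0, 1000)
drier = historical - 4.0
target = 20.0

print(round(reliability(historical, target), 2),
      round(reliability(drier, target), 2))
```

Because reliability is a tail probability, a modest shift in mean flow can produce a disproportionate drop in target attainment.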
Sensitivity of the photolysis rate to the uncertainties in spectral solar irradiance variability
NASA Astrophysics Data System (ADS)
Sukhodolov, Timofei; Rozanov, Eugene; Bais, Alkiviadis; Tourpali, Kleareti; Shapiro, Alexander; Telford, Paul; Peter, Thomas; Schmutz, Werner
2014-05-01
The state of the stratospheric ozone layer and temperature structure are mostly maintained by photolytic processes. Therefore, the uncertainties in the magnitude and spectral composition of the spectral solar irradiance (SSI) evolution during the declining phase of the 23rd solar cycle have substantial implications for the modeling of the middle atmosphere evolution, leading not only to pronounced differences in the heating rates but also affecting photolysis rates. To estimate the role of SSI uncertainties we have compared the most important photolysis rates (O2, O3, and NO2) calculated with the reference radiation code libRadtran using SSI for June 2004 and February 2009 obtained from two models (NRL, COSI) and one observational data set based on SORCE observations. We found that below 40 km, changes in the ozone and oxygen photolysis can reach several tenths of a percent, caused by the changes of the SSI in the Hartley and Huggins bands for ozone, and several percent for oxygen, caused by the changes of the SSI in the Herzberg continuum and Schumann-Runge bands. For the SORCE data set these changes are 2-4 times higher. We have also evaluated the ability of several photolysis rate calculation methods widely used in atmospheric models to reproduce the absolute values of the photolysis rates and their response to the implied SSI changes. With some caveats, all schemes show good results in the middle stratosphere compared to libRadtran. However, in the troposphere and mesosphere there are more noticeable differences.
Rearden, Bradley T; Duhamel, Isabelle; Letang, Eric
2009-01-01
New TSUNAMI tools of SCALE 6, TSURFER and TSAR, are demonstrated to examine the bias effects of small-worth test materials, relative to reference experiments. TSURFER is a data-adjustment tool for assessing bias and bias uncertainty, and TSAR computes the sensitivity of the change in reactivity between two systems to the cross-section data common to their calculation. With TSURFER, it is possible to examine biases and bias uncertainties in fine detail. For replacement experiments, the application of TSAR to TSUNAMI-3D sensitivity data for pairs of experiments allows the isolation of sources of bias that could otherwise be obscured by materials with more worth in an individual experiment. The application of TSUNAMI techniques in the design of nine reference experiments for the MIRTE program will allow application of these advanced techniques to data acquired in the experimental series. The validation of all materials in a complex criticality safety application likely requires consolidating information from many different critical experiments. For certain materials, such as structural materials or fission products, only a limited number of critical experiments are available, and the fuel and moderator compositions of the experiments may differ significantly from those of the application. In these cases, it is desirable to extract the computational bias of a specific material from an integral keff measurement and use that information to quantify the bias due to the use of the same material in the application system. Traditional parametric and nonparametric methods are likely to prove poorly suited for such a consolidation of specific data components from a diverse set of experiments. An alternative choice for consolidating specific data from numerous sources is a data adjustment tool, like the ORNL tool TSURFER (Tool for Sensitivity/Uncertainty analysis of Response Functionals using Experimental Results) from SCALE 6.1. However, even with TSURFER, it may be difficult to
Nichols, W.E.; Freshley, M.D.
1991-10-01
This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs.
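The two curve-fitting parameters mentioned above enter through the standard van Genuchten retention relation; a sketch with illustrative parameter values (not the PACE-90 inputs):

```python
import numpy as np

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    """van Genuchten retention: moisture content vs. matric suction psi,
    with m = 1 - 1/n; alpha and n are the two curve-fitting parameters."""
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(psi)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative tuff-like values: residual/saturated moisture 0.05/0.35,
# alpha = 0.05 1/m, n = 1.8, evaluated at 10 m of suction.
theta = van_genuchten(psi=10.0, theta_r=0.05, theta_s=0.35, alpha=0.05, n=1.8)
print(round(theta, 3))  # → 0.318
```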
Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764
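The second simplification above (fixing a low-influence input) can be illustrated on a smooth invented stand-in for the ABM output; freezing the input that contributes little variance leaves the outcome distribution essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(5)

def abm_proxy(payment, neighbor_influence, noise):
    # Invented smooth stand-in for the ABM output (e.g. acres conserved);
    # the third input contributes very little of the output variance.
    return 50.0 * payment + 20.0 * neighbor_influence ** 2 + 0.5 * noise

n = 100_000
x1, x2, x3 = (rng.uniform(0, 1, n) for _ in range(3))
y_full = abm_proxy(x1, x2, x3)

# Factor fixing: freeze the low-influence input at its mean and check that
# the output mean and spread are essentially unchanged, i.e. the simpler
# model explores the same outcome space with one fewer input.
y_fixed = abm_proxy(x1, x2, 0.5)

print(abs(y_full.mean() - y_fixed.mean()) < 0.1,
      abs(y_full.std() - y_fixed.std()) < 0.1)  # → True True
```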
Kearfott, K.J.; Samei, E.; Han, S.
1995-03-01
An error analysis of the effects of the algorithms used to resolve the deep and shallow dose components for mixed fields from multi-element thermoluminescent dosimeter (TLD) badge systems was undertaken for a commonly used system. Errors were introduced independently into each of the four element readings for a badge, and the effects on the calculated dose equivalents were observed. A normal random number generator was then utilized to introduce simultaneous variations in the element readings for different uncertainties. The Department of Energy Laboratory Accreditation Program radiation fields were investigated. Problems arising from the discontinuous nature of the algorithm were encountered for a number of radiation sources for which the algorithm misidentified the radiation field. Mixed fields of low-energy photons and betas were found to present particular difficulties for the algorithm. The study demonstrates the importance of small fluctuations in the TLD element's response in a multi-element approach. 24 refs., 5 figs., 7 tabs.
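The simultaneous-perturbation step described above can be sketched as a small Monte Carlo propagation. The four-element dose algorithm below is purely hypothetical (real badge algorithms are branching and discontinuous), as are the weights and the 5% relative uncertainty; only the error-propagation machinery is illustrated.

```python
import random
import statistics

def dose_algorithm(elements):
    """Hypothetical deep/shallow dose algorithm for a four-element badge.

    Real multi-element algorithms are branching and discontinuous; this
    linear stand-in exists only to demonstrate propagation of reading errors.
    """
    e1, e2, e3, e4 = elements
    deep = 0.9 * e1 + 0.1 * e2       # assumed deep-dose weights
    shallow = 0.6 * e3 + 0.4 * e4    # assumed shallow-dose weights
    return deep, shallow

def propagate(readings, rel_sigma=0.05, n=10000, seed=42):
    """Apply simultaneous normal perturbations to all element readings
    and return mean and standard deviation of each dose component."""
    rng = random.Random(seed)
    deeps, shallows = [], []
    for _ in range(n):
        noisy = [r * (1.0 + rng.gauss(0.0, rel_sigma)) for r in readings]
        d, s = dose_algorithm(noisy)
        deeps.append(d)
        shallows.append(s)
    return (statistics.mean(deeps), statistics.stdev(deeps),
            statistics.mean(shallows), statistics.stdev(shallows))

md, sd, ms, ss = propagate([100.0, 100.0, 100.0, 100.0])
```

For equal readings and 5% element uncertainty, each dose spread is the quadrature sum of the weighted element uncertainties; with a discontinuous algorithm the same perturbations can instead trigger field misidentification, which is the failure mode the study examines.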
NASA Astrophysics Data System (ADS)
Hill, Mary
2016-04-01
Combining different data types can seem like combining apples and oranges. Yet combining different data types in inverse modeling and uncertainty quantification is important in all types of environmental systems. There are two main methods for combining different data types: single-objective optimization (SOO) with weighting, and multi-objective optimization (MOO), in which coefficients for data groups are defined and changed during model development. SOO and MOO are related in that different coefficient values in MOO are equivalent to considering alternative weightings. MOO methods often take many model runs and tend to be much more computationally expensive than SOO, but for SOO the weighting needs to be defined. When alternative models are more important to consider than alternative weightings, SOO can be advantageous (Lu et al. 2012). This presentation considers how to determine the weighting when using SOO. A saltwater intrusion example is used to examine two methods of weighting three data types: weighting based on contributions to the objective function, as suggested by Anderson et al. (2015), and error-based weighting, as suggested by Hill and Tiedeman (2007). The consequences of weighting for measures of uncertainty, the importance and interdependence of parameters, and the importance of observations are presented. This work is important to many types of environmental modeling, including climate models, because integrating many kinds of data is often important. The advent of rainfall-runoff models with fewer numerical daemons, such as TOPKAPI and SUMMA, makes the convenient model analysis methods used in this work more useful for many hydrologic problems.
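Error-based weighting in an SOO objective can be sketched as follows: weighting each residual by one over its observation standard deviation makes heads, flows, and concentrations dimensionless and hence comparable in a single weighted least-squares sum. The data-type names, values, and standard deviations below are illustrative assumptions, not the saltwater-intrusion data set.

```python
def weighted_sse(groups):
    """Error-based weighted least-squares objective (after Hill and
    Tiedeman, 2007): each residual is divided by its observation
    standard deviation, so every data type contributes in comparable,
    dimensionless units.

    groups maps a data-type name to (observed, simulated, sigma).
    Returns the total objective and the per-group contributions.
    """
    total = 0.0
    contributions = {}
    for name, (obs, sim, sigma) in groups.items():
        c = sum(((o - s) / sigma) ** 2 for o, s in zip(obs, sim))
        contributions[name] = c
        total += c
    return total, contributions

# Hypothetical data types for a saltwater-intrusion model (values and
# sigmas are illustrative only).
groups = {
    "heads_m":   ([10.2, 9.8], [10.0, 10.0], 0.5),   # sigma = 0.5 m
    "flows_m3d": ([120.0],     [100.0],      10.0),  # sigma = 10 m3/d
    "conc_mgL":  ([35.0],      [30.0],       2.0),   # sigma = 2 mg/L
}
total, parts = weighted_sse(groups)
```

The per-group contributions make the comparison with contribution-based weighting direct: if one data type dominates the objective, contribution-based schemes rescale it, whereas error-based weighting leaves the statistical interpretation of the weights intact.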
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a
Bignell, L J; Mo, L; Alexiev, D; Hashemi-Nezhad, S R
2010-01-01
Radiation transport simulations of the most probable gamma- and X-ray emissions of (123)I and (54)Mn in a three photomultiplier tube liquid scintillation detector have been carried out. A Geant4 simulation was used to acquire energy deposition spectra and interaction probabilities with the scintillant, as required for absolute activity measurement using the triple to double coincidence ratio (TDCR) method. A sensitivity and uncertainty analysis of the simulation model is presented here. The uncertainty in the Monte Carlo simulation results due to the input parameter uncertainties was found to be more significant than the statistical uncertainty component for a typical number of simulated decay events. The model was most sensitive to changes in the volume of the scintillant. Estimates of the relative uncertainty associated with the simulation outputs due to the combined stochastic and input uncertainties are provided. A Monte Carlo uncertainty analysis of an (123)I TDCR measurement indicated that accounting for the simulation uncertainties increases the uncertainty of efficiency of the logical sum of double coincidence by 5.1%. PMID:20036571
Parameter sensitivity and uncertainty in SWAT: A comparison across five USDA-ARS watersheds
Technology Transfer Automated Retrieval System (TEKTRAN)
The USDA-ARS Conservation Effects Assessment Project (CEAP) calls for improved understanding of the strengths and weaknesses of watershed-scale, water quality models under a range of climatic, soil, topographic, and land use conditions. Assessing simulation model parameter sensitivity helps establi...
NASA Astrophysics Data System (ADS)
Khodayar-Pardo, Samiro; Lopez-Baeza, Ernesto; Coll Pajaron, M. Amparo
Sensitivity of seasonal weather prediction and extreme precipitation events to soil moisture initialization uncertainty using SMOS soil moisture products
(1) S. Khodayar, (2) A. Coll, (2) E. Lopez-Baeza. (1) Institute for Meteorology and Climate Research, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany. (2) University of Valencia, Earth Physics and Thermodynamics Department, Climatology from Satellites Group.
Soil moisture is an important variable in agriculture, hydrology, meteorology and related disciplines. Despite its importance, it is difficult to obtain an appropriate representation of this variable, mainly because of its high temporal and spatial variability. SVAT (Soil-Vegetation-Atmosphere-Transfer) models can be used to simulate the temporal behaviour and spatial distribution of soil moisture in a given area, and state-of-the-art products such as the soil moisture measurements from the SMOS (Soil Moisture and Ocean Salinity) space mission may also be convenient. The potential role of soil moisture initialization and associated uncertainty in numerical weather prediction is illustrated in this study through sensitivity numerical experiments using the SVAT SURFEX model and the non-hydrostatic COSMO model. The aim of this investigation is twofold: (a) to demonstrate the sensitivity of model simulations of convective precipitation to initial soil moisture uncertainty, as well as the impact on the representation of extreme precipitation events, and (b) to assess the usefulness of SMOS soil moisture products to improve the simulation of water cycle components and heavy precipitation events. Simulated soil moisture and precipitation fields are compared with observations and with level-1 (~1 km), level-2 (~15 km) and level-3 (~35 km) soil moisture maps generated from SMOS over the Iberian Peninsula, the SMOS validation area (50 km x 50 km, eastern Spain) and selected stations, where in situ measurements are available covering different vegetation cover
NASA Astrophysics Data System (ADS)
Brown, Tristan R.
The revised Renewable Fuel Standard requires the annual blending of 16 billion gallons of cellulosic biofuel by 2022 from zero gallons in 2009. The necessary capacity investments have been underwhelming to date, however, and little is known about the likely composition of the future cellulosic biofuel industry as a result. This dissertation develops a framework for identifying and analyzing the industry's likely future composition while also providing a possible explanation for why investment in cellulosic biofuels capacity has been low to date. The results of this dissertation indicate that few cellulosic biofuel pathways will be economically competitive with petroleum on an unsubsidized basis. Of five cellulosic biofuel pathways considered under 20-year price forecasts with volatility, only two achieve positive mean 20-year net present value (NPV) probabilities. Furthermore, recent exploitation of U.S. shale gas reserves and the subsequent fall in U.S. natural gas prices have negatively impacted the economic competitiveness of all but two of the cellulosic biofuel pathways considered; only two of the five pathways achieve substantially higher 20-year NPVs under a post-shale gas economic scenario relative to a pre-shale gas scenario. The economic competitiveness of cellulosic biofuel pathways with petroleum is reduced further when considered under price uncertainty in combination with realistic financial assumptions. This dissertation calculates pathway-specific costs of capital for five cellulosic biofuel pathway scenarios. The analysis finds that the large majority of the scenarios incur costs of capital that are substantially higher than those commonly assumed in the literature. Employment of these costs of capital in a comparative techno-economic analysis (TEA) greatly reduces the mean 20-year NPVs for each pathway while increasing their 10-year probabilities of default to above 80% for all five scenarios. Finally, this dissertation quantifies the economic competitiveness of six
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary
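The role of regularization in restoring well-posedness can be illustrated with a minimal Tikhonov example. The 2x2 nearly singular linear model below is a stand-in chosen for transparency, not DALEC itself: a perturbation of 1e-4 in one datum moves the unregularized solution far from the truth (condition 3 fails), while a small penalty keeps it stable.

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def tikhonov(A, y, lam):
    """Minimize ||A x - y||^2 + lam ||x||^2 for a 2x2 model matrix A,
    i.e. x = (A^T A + lam I)^(-1) A^T y."""
    (a11, a12), (a21, a22) = A
    m11 = a11 * a11 + a21 * a21 + lam   # A^T A + lam I
    m12 = a11 * a12 + a21 * a22
    m22 = a12 * a12 + a22 * a22 + lam
    r1 = a11 * y[0] + a21 * y[1]        # A^T y
    r2 = a12 * y[0] + a22 * y[1]
    return solve2(m11, m12, m12, m22, r1, r2)

# Nearly singular linear model h(x) = A x: the solution does not depend
# continuously on the data, so the inverse problem is ill-posed in practice.
A = [[1.0, 1.0], [1.0, 1.0001]]
y_true = [2.0, 2.0001]   # generated exactly by x = (1, 1)
y_noisy = [2.0, 2.0]     # perturbation of only 1e-4 in one datum

x_naive = tikhonov(A, y_noisy, 0.0)    # unregularized: jumps to ~(2, 0)
x_reg = tikhonov(A, y_noisy, 1e-4)     # regularized: stays near (1, 1)
```

The same mechanism underlies the abstract's observation about slow processes: directions of parameter space to which the observations are nearly insensitive behave like the small singular value here, and only regularization (or prior information) keeps their estimates bounded.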
NASA Astrophysics Data System (ADS)
Mockler, Eva M.; O'Loughlin, Fiachra E.; Bruen, Michael
2016-05-01
Increasing pressures on water quality due to intensification of agriculture have raised demands for environmental modeling to accurately simulate the movement of diffuse (nonpoint) nutrients in catchments. As hydrological flows drive the movement and attenuation of nutrients, individual hydrological processes in models should be adequately represented for water quality simulations to be meaningful. In particular, the relative contribution of groundwater and surface runoff to rivers is of interest, as increasing nitrate concentrations are linked to higher groundwater discharges. These requirements for hydrological modeling of groundwater contribution to rivers initiated this assessment of internal flow path partitioning in conceptual hydrological models. In this study, a variance-based sensitivity analysis method was used to investigate parameter sensitivities and flow partitioning of three conceptual hydrological models simulating 31 Irish catchments. We compared two established conceptual hydrological models (NAM and SMARG) and a new model (SMART), produced especially for water quality modeling. In addition to the criteria that assess streamflow simulations, a ratio of average groundwater contribution to total streamflow was calculated for all simulations over the 16-year study period. As observed time series of groundwater contributions to streamflow are not available at the catchment scale, the groundwater ratios were evaluated against average annual indices of base flow and deep groundwater flow for each catchment. The exploration of sensitivities of internal flow path partitioning was a specific focus to assist in evaluating model performances. Results highlight that model structure has a strong impact on simulated groundwater flow paths. Sensitivity to the internal pathways in the models is not reflected in the performance criteria results. This demonstrates that simulated groundwater contribution should be constrained by independent data to ensure results
Albrecht, Achim; Miquel, Stéphan
2010-01-01
Biosphere dose conversion factors are computed for the French high-level geological waste disposal concept, illustrating the combined probabilistic and deterministic approach; both (135)Cs and (79)Se are used as examples. Probabilistic analyses of the system considering all parameters, as well as physical and societal parameters independently, allow quantification of their mutual impact on overall uncertainty. As physical parameter uncertainties decrease, for example with the availability of further experimental and field data, the societal uncertainties, which are less easily constrained, particularly for the long term, become more and more significant. One also has to distinguish uncertainties impacting the low-dose portion of a distribution from those impacting the high-dose range, the latter logically having a greater impact in an assessment situation. The use of cumulative probability curves allows us to quantify probability variations as a function of the dose estimate, with the probability variation (slope of the curve) indicative of the uncertainties of different radionuclides. In the case of (135)Cs, with better-constrained physical parameters, the uncertainty in human behaviour is more significant, even in the high-dose range, where it increases the probability of higher doses. For both radionuclides, uncertainties have a stronger impact in the intermediate than in the high-dose range. In an assessment context, the focus will be on probabilities of higher dose values. The probabilistic approach can furthermore be used to construct critical groups based on a predefined probability level and to ensure that critical groups cover the expected range of uncertainty. PMID:19758732
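The cumulative probability curves discussed above can be sketched with empirical exceedance probabilities computed from Monte Carlo dose samples. The lognormal dose distributions and all parameter values below are illustrative assumptions, not the study's results; they only demonstrate how a wider parameter uncertainty raises the probability in the high-dose range even when the median dose is lower.

```python
import random
import statistics

def prob_exceed(samples, d):
    """Empirical probability that a sampled dose exceeds d: one point on
    the complementary cumulative probability curve."""
    return sum(1 for s in samples if s > d) / len(samples)

rng = random.Random(3)
# Hypothetical lognormal annual-dose distributions (Sv/y); medians and
# spreads are illustrative. The wider spread mimics a radionuclide whose
# parameters (physical or societal) are less well constrained.
dose_a = [rng.lognormvariate(-20.0, 0.5) for _ in range(50000)]  # narrow
dose_b = [rng.lognormvariate(-20.5, 1.0) for _ in range(50000)]  # wide

median_a = statistics.median(dose_a)   # A has the higher median dose...
median_b = statistics.median(dose_b)

d_high = 5.6e-9                        # ...but in the high-dose range,
p_a = prob_exceed(dose_a, d_high)      # relevant in an assessment context,
p_b = prob_exceed(dose_b, d_high)      # the wider distribution B dominates.
```

The slope of the empirical curve around a dose of interest plays the same diagnostic role as in the abstract: a shallow slope (wide distribution) signals that uncertainty reduction would most change the estimated probability of high doses.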
Hostetler, S.; Pisias, N.; Mix, A.
2006-01-01
The faunal and floral gradients that underlie the CLIMAP (1981) sea-surface temperature (SST) reconstructions for the Last Glacial Maximum (LGM) reflect ocean temperature gradients and frontal positions. The transfer functions used to reconstruct SSTs from biologic gradients are biased, however, because at the warmest sites they display inherently low sensitivity in translating fauna to SST and they underestimate SST within the euphotic zones where the pycnocline is strong. Here we assemble available data and apply a statistical approach to adjust for hypothetical biases in the faunal-based SST estimates of LGM temperature. The largest bias adjustments are distributed in the tropics (to address low sensitivity) and subtropics (to address underestimation in the euphotic zones). The resulting SSTs are generally in better agreement than CLIMAP with recent geochemical estimates of glacial-interglacial temperature changes. We conducted a series of model experiments using the GENESIS general atmospheric circulation model to assess the sensitivity of the climate system to our bias-adjusted SSTs. Globally, the new SST field results in a modeled LGM surface-air cooling relative to present of 6.4 °C (1.9 °C cooler than that of CLIMAP). Relative to the simulation with CLIMAP SSTs, modeled precipitation over the oceans is reduced by 0.4 mm d⁻¹ (an anomaly -0.4 versus 0.0 mm d⁻¹ for CLIMAP) and increased over land (an anomaly -0.2 versus -0.5 mm d⁻¹ for CLIMAP). Regionally strong responses are induced by changes in SST gradients. Data-model comparisons indicate improvement in agreement relative to CLIMAP, but differences among terrestrial data inferences and simulated moisture and temperature remain. Our SSTs result in positive mass balance over the northern hemisphere ice sheets (primarily through reduced summer ablation), supporting the hypothesis that tropical and subtropical ocean temperatures may have played a role in triggering glacial changes at higher latitudes.
J. Zhu; K. Pohlmann; J. Chapman; C. Russell; R.W.H. Carroll; D. Shafer
2009-09-10
Yucca Mountain (YM), Nevada, has been proposed by the U.S. Department of Energy as the nation's first permanent geologic repository for spent nuclear fuel and high-level radioactive waste. In this study, the potential for groundwater advective pathways from underground nuclear testing areas on the Nevada Test Site (NTS) to intercept the subsurface of the proposed land withdrawal area for the repository is investigated. The timeframe for advective travel and its uncertainty for possible radionuclide movement along these flow pathways is estimated as a result of effective-porosity value uncertainty for the hydrogeologic units (HGUs) along the flow paths. Furthermore, sensitivity analysis is conducted to determine the most influential HGUs on the advective radionuclide travel times from the NTS to the YM area. Groundwater pathways are obtained using the particle tracking package MODPATH and flow results from the Death Valley regional groundwater flow system (DVRFS) model developed by the U.S. Geological Survey (USGS). Effective-porosity values for HGUs along these pathways are one of several parameters that determine possible radionuclide travel times between the NTS and proposed YM withdrawal areas. Values and uncertainties of HGU porosities are quantified through evaluation of existing site effective-porosity data and expert professional judgment and are incorporated in the model through Monte Carlo simulations to estimate mean travel times and uncertainties. The simulations are based on two steady-state flow scenarios, the pre-pumping (the initial stress period of the DVRFS model) and the 1998 pumping (assuming steady-state conditions resulting from pumping in the last stress period of the DVRFS model) scenarios, for the purpose of long-term prediction and monitoring. The pumping scenario accounts for groundwater withdrawal activities in the Amargosa Desert and other areas downgradient of YM. Considering each detonation in a clustered region around Pahute Mesa (in
NASA Astrophysics Data System (ADS)
DeAngelis, A. M.; Qu, X.; Hall, A. D.; Klein, S. A.
2014-12-01
The hydrological cycle is expected to undergo substantial changes in response to global warming, with all climate models predicting an increase in global-mean precipitation. There is considerable spread among models, however, in the projected increase of global-mean precipitation, even when normalized by surface temperature change. In an attempt to develop a better physical understanding of the causes of this intermodel spread, we investigate the rapid and temperature-mediated responses of global-mean precipitation to CO2 forcing in an ensemble of CMIP5 models by applying regression analysis to pre-industrial and abrupt quadrupled CO2 simulations, and focus on the atmospheric radiative terms that balance global precipitation. The intermodel spread in the temperature-mediated component, which dominates the spread in total hydrological sensitivity, is highly correlated with the spread in temperature-mediated clear-sky shortwave (SW) atmospheric heating among models. Upon further analysis of the sources of intermodel variability in SW heating, we find that increases of upper atmosphere and (to a lesser extent) total column water vapor in response to 1K surface warming only partly explain intermodel differences in the SW response. Instead, most of the spread in the SW heating term is explained by intermodel differences in the sensitivity of SW absorption to fixed changes in column water vapor. This suggests that differences in SW radiative transfer codes among models are the dominant source of variability in the response of atmospheric SW heating to warming. Better understanding of the SW heating sensitivity to water vapor in climate models appears to be critical for reducing uncertainty in the global hydrological response to future warming. Current work entails analysis of observations to potentially constrain the intermodel spread in SW sensitivity to water vapor, as well as more detailed investigation of the radiative transfer schemes in different models and how
NASA Astrophysics Data System (ADS)
Kong, Song-Charng; Reitz, Rolf D.
2003-06-01
This study used a numerical model to investigate the combustion process in a premixed iso-octane homogeneous charge compression ignition (HCCI) engine. The engine was a supercharged Cummins C engine operated under HCCI conditions. The CHEMKIN code was implemented into an updated KIVA-3V code so that the combustion could be modelled using detailed chemistry in the context of engine CFD simulations. The model was able to accurately simulate the ignition timing and combustion phasing for various engine conditions. The unburned hydrocarbon emissions were also well predicted, while the carbon monoxide emissions were underpredicted. Model results showed that the majority of unburned hydrocarbon is located in the piston-ring crevice region and that the carbon monoxide resides in the vicinity of the cylinder walls. A sensitivity study of the computational grid resolution indicated that the combustion predictions were relatively insensitive to the grid density. However, the piston-ring crevice region needed to be simulated with high resolution to obtain accurate emissions predictions. The model results also indicated that HCCI combustion and emissions are very sensitive to the initial mixture temperature. The computations also show that the carbon monoxide emissions prediction can be significantly improved by modifying a key oxidation reaction rate constant.
Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.
2010-09-01
Arctic sea ice is an important component of the global climate system and due to feedback effects the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters on the solution.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Ward, R.C.; Kocher, D.C.; Hicks, B.B.; Hosker, R.P. Jr.; Ku, J.Y.; Rao, K.S.
1985-01-01
We have studied the sensitivity of results from the CRAC2 computer code, which predicts health impacts from a reactor-accident scenario, to uncertainties in selected meteorological models and parameters. The sources of uncertainty examined include the models for plume rise and wet deposition and the meteorological bin-sampling procedure. An alternative plume-rise model usually had little effect on predicted health impacts. In an alternative wet-deposition model, the scavenging rate depends only on storm type, rather than on rainfall rate and atmospheric stability class as in the CRAC2 model. Use of the alternative wet-deposition model in meteorological bin-sampling runs decreased predicted mean early injuries by as much as a factor of 2-3 and, for large release heights and sensible heat rates, decreased mean early fatalities by nearly an order of magnitude. The bin-sampling procedure in CRAC2 was expanded by dividing each rain bin into four bins that depend on rainfall rate. Use of the modified bin structure in conjunction with the CRAC2 wet-deposition model changed all predicted health impacts by less than a factor of 2. 9 references.
NASA Astrophysics Data System (ADS)
Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk
2010-05-01
Numerical models are invaluable for predicting water fluxes in the vadose zone, and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited to such complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimates, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall and other meteorological data, as well as water contents at different soil depths, were recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, was recorded at 0.5-hour intervals. The leaf area index was also measured at selected times during the year in order to evaluate the energy reaching the soil and to deduce the potential evaporation. Based on the profile description, five soil layers have been distinguished in the podzol. Two models have been used for simulating water fluxes: (i) a mechanistic model, the HYDRUS-1D model, which solves the Richards equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity within the chosen parameter search space. For
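A Morris-style one-at-a-time screening can be sketched as follows. This sketch uses a radial (restart-from-base) variant of the method, and the toy model, parameter names, and ranges are illustrative assumptions rather than the actual HYDRUS-1D setup.

```python
import random

def morris_oat(model, bounds, r=20, seed=0):
    """Morris-style one-at-a-time screening (radial variant).

    For r random base points, each parameter is perturbed one at a time by
    a step delta in normalized [0, 1] space; mu_star, the mean absolute
    elementary effect, ranks parameter influence on the model output.
    """
    rng = random.Random(seed)
    k = len(bounds)
    delta = 0.5
    mu_star = [0.0] * k

    def denorm(z):
        # Map normalized coordinates back to physical parameter ranges.
        return [lo + zi * (hi - lo) for zi, (lo, hi) in zip(z, bounds)]

    for _ in range(r):
        x = [rng.uniform(0.0, 0.5) for _ in range(k)]  # room for +delta
        y0 = model(denorm(x))
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            mu_star[i] += abs(model(denorm(xp)) - y0) / delta / r
    return mu_star

# Hypothetical soil-hydraulic parameters; names, ranges, and the linear
# toy response are illustrative, not the study's configuration.
bounds = [(1.0, 100.0),   # Ks: saturated hydraulic conductivity (cm/d)
          (1.1, 3.0),     # n: van Genuchten shape parameter
          (0.01, 0.1)]    # alpha: van Genuchten alpha (1/cm)

def toy_model(p):
    Ks, n, alpha = p
    return 0.05 * Ks + 2.0 * n + 1.0 * alpha

mu = morris_oat(toy_model, bounds)  # Ks and n dominate; alpha is negligible
```

Parameters with small mu_star can be fixed before calibration, which is exactly the purpose of running the screening over the chosen search space first.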
NASA Astrophysics Data System (ADS)
Challinor, A. J.
2010-12-01
Recent progress in assessing the impacts of climate variability and change on crops using multiple regional-scale simulations of crop and climate (i.e., ensembles) is presented. Simulations for India and China used perturbed responses to elevated carbon dioxide constrained using observations from FACE studies and controlled environments. Simulations with crop parameter sets representing existing and potential future adapted varieties were also carried out. The results for India are compared to sensitivity tests on two other crop models. For China, a parallel approach used socio-economic data to account for autonomous farmer adaptation. Results for the USA analysed cardinal temperatures under a range of local warming scenarios for 2711 varieties of spring wheat. The results are as follows. 1. Quantifying and reducing uncertainty: the relative contribution of uncertainty in crop and climate simulation to the total uncertainty in projected yield changes is examined. The observational constraints from FACE and controlled-environment studies are shown to be the likely critical factor in maintaining relatively low crop parameter uncertainty. Without these constraints, crop simulation uncertainty in a doubled-CO2 environment would likely be greater than uncertainty in simulating climate. However, consensus across crop models in India varied across different biophysical processes. 2. The response of yield to changes in local mean temperature was examined and compared to that found in the literature; no consistent response to temperature change was found across studies. 3. Implications for adaptation: the simulations of spring wheat in China show the relative importance of tolerance to water and heat stress in avoiding future crop failures. The greatest potential for reducing the number of harvests less than one standard deviation below the baseline mean yield value comes from alleviating water stress; the greatest potential for reducing harvests less than two
ERIC Educational Resources Information Center
Carr, Richard; McLean, Doug
1995-01-01
Discusses how educational-facility maintenance departments can cut costs in floor cleaning through careful evaluation of floor equipment and products. Tips for choosing carpet detergents are highlighted. (GR)
Tanner, Jean Paul; Salemi, Jason L; Stuart, Amy L; Yu, Haofei; Jordan, Melissa M; DuClos, Chris; Cavicchia, Philip; Correia, Jane A; Watkins, Sharon M; Kirby, Russell S
2016-05-01
We investigate uncertainty in estimates of pregnant women's exposure to ambient PM2.5 and benzene derived from central-site monitoring data. Through a study of live births in Florida during 2000-2009, we discuss the selection of spatial and temporal scales of analysis, limiting distances, and aggregation method. We estimate exposure concentrations and classify exposure for a range of alternatives, and compare impacts. Estimated exposure concentrations were most sensitive to the temporal scale of analysis for PM2.5, with similar sensitivity to spatial scale for benzene. Using 1-12 versus 3-8 weeks of gestational age as the exposure window resulted in reclassification of exposure by at least one quartile for up to 37% of mothers for PM2.5 and 27% for benzene. The largest mean absolute differences in concentration resulting from any decision were 0.78 µg/m³ and 0.44 ppbC, respectively. No bias toward systematically higher or lower estimates was found between choices for any decision. PMID:27246278
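The exposure-window comparison described above can be sketched as follows. The weekly concentration series, cohort size, and distribution parameters below are synthetic stand-ins invented for illustration, not the Florida birth-cohort data:

```python
import numpy as np

rng = np.random.default_rng(0)

def quartile_class(conc):
    """Assign each exposure concentration to a quartile (0-3)."""
    cuts = np.quantile(conc, [0.25, 0.5, 0.75])
    return np.searchsorted(cuts, conc, side="right")

# Synthetic weekly PM2.5 series for 1000 pregnancies (µg/m³)
weekly = rng.normal(10.0, 3.0, size=(1000, 12))

# Mean exposure over gestational weeks 1-12 vs. weeks 3-8
mean_1_12 = weekly.mean(axis=1)
mean_3_8 = weekly[:, 2:8].mean(axis=1)

# Fraction of mothers whose exposure quartile changes with the window choice
frac = np.mean(quartile_class(mean_1_12) != quartile_class(mean_3_8))
print(f"fraction reclassified by at least one quartile: {frac:.2f}")
```

Because quartile cut points are recomputed within each window, reclassification here reflects only the window choice, mirroring the abstract's 1-12 versus 3-8 week comparison.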
Davidson, J.W.; Dudziak, D.J.; Higgs, C.E.; Stepanek, J.
1988-01-01
AARE, a code package to perform Advanced Analysis for Reactor Engineering, is a linked modular system for fission reactor core and shielding, as well as fusion blanket, analysis. Its cross-section sensitivity and uncertainty path presently includes the cross-section processing and reformatting code TRAMIX, the cross-section homogenization and library reformatting code MIXIT, the 1-dimensional transport code ONEDANT, the 2-dimensional transport code TRISM, and the 1- and 2-dimensional cross-section sensitivity and uncertainty code SENSIBL. In the present work, a short description of the whole AARE system is given, followed by a detailed description of the cross-section sensitivity and uncertainty path. 23 refs., 2 figs.
FIRST FLOOR FRONT ROOM. SECOND FLOOR HAS BEEN REMOVED NOTE ...
FIRST FLOOR FRONT ROOM. SECOND FLOOR HAS BEEN REMOVED-- NOTE PRESENCE OF SECOND FLOOR WINDOWS (THE LATTER FLOOR WAS REMOVED MANY YEARS AGO), See also PA-1436 B-12 - Kid-Physick House, 325 Walnut Street, Philadelphia, Philadelphia County, PA
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
NASA Astrophysics Data System (ADS)
Wang, H.; Rasch, P. J.; Easter, R. C.; Singh, B.; Qian, Y.; Ma, P.; Zhang, R.
2013-12-01
, export to emission ratio) of CA emitted from a number of predefined source regions/sectors, establish quantitative aerosol source-receptor relationships, and characterize source-to-receptor transport pathways. We can quantify the sensitivity of atmospheric CA concentrations and surface deposition in receptor regions of interest (including but not limited to the Arctic) to uncertainties in emissions of particular sources without actually perturbing the emissions, which is required by some other strategies for determining source-receptor relationships. Our study shows that Arctic BC is much more sensitive to high-latitude local emissions than to mid-latitude major source contributors. For example, BC emitted from East Asia, which contributes about 20% of the annual mean BC loading in the Arctic, is 40 times less efficient at increasing Arctic BC than the same amount emitted from local sources. This indicates that local BC sources (e.g., fires, metal smelting and gas flaring), which are highly uncertain or even missing from popular emission inventories, may at least partly explain the historical under-prediction of Arctic BC in many climate models. The established source-receptor relationships will be used to assess potential climate impacts of the emission uncertainties.
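The efficiency comparison in that abstract reduces to simple arithmetic: a source region's transport efficiency is its contribution to the Arctic BC burden per unit of emission. The emission totals and burden fractions below are hypothetical placeholders (not values from the study), chosen only so the resulting ratio illustrates the quoted 40-fold difference:

```python
# Hypothetical inputs: annual BC emissions (Gg/yr) and each region's
# fractional contribution to the Arctic BC burden.
emissions = {"east_asia": 1800.0, "arctic_local": 9.0}
burden_fraction = {"east_asia": 0.20, "arctic_local": 0.04}

# Efficiency = Arctic burden contribution per unit emission
efficiency = {r: burden_fraction[r] / emissions[r] for r in emissions}
ratio = efficiency["arctic_local"] / efficiency["east_asia"]
print(f"local emissions are {ratio:.0f}x more efficient per unit emitted")
```

The point of the metric is that a large absolute contribution (East Asia's ~20%) can coexist with a very low per-unit efficiency, so small local sources dominate the sensitivity.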
Bartine, D.E.; Cacuci, D.G.
1983-09-13
This paper describes sources of uncertainty in the data used for calculating dose estimates for the Hiroshima explosion and details a methodology for systematically obtaining best estimates and reduced uncertainties for the radiation doses received. (ACR)
LeBouthillier, Daniel M; Asmundson, Gordon J G
2015-01-01
Several mechanisms have been posited for the anxiolytic effects of exercise, including reductions in anxiety sensitivity through interoceptive exposure. Studies on aerobic exercise lend support to this hypothesis; however, research investigating aerobic exercise in comparison to placebo, the dose-response relationship between aerobic exercise and anxiety sensitivity, the efficacy of aerobic exercise across the spectrum of anxiety sensitivity and the effect of aerobic exercise on other related constructs (e.g. intolerance of uncertainty, distress tolerance) is lacking. We explored reductions in anxiety sensitivity and related constructs following a single session of exercise in a community sample using a randomized controlled trial design. Forty-one participants completed 30 min of aerobic exercise or a placebo stretching control. Anxiety sensitivity, intolerance of uncertainty and distress tolerance were measured at baseline, post-intervention and 3-day and 7-day follow-ups. Individuals in the aerobic exercise group, but not the control group, experienced significant reductions with moderate effect sizes in all dimensions of anxiety sensitivity. Intolerance of uncertainty and distress tolerance remained unchanged in both groups. Our trial supports the efficacy of aerobic exercise in uniquely reducing anxiety sensitivity in individuals with varying levels of the trait and highlights the importance of empirically validating the use of aerobic exercise to address specific mental health vulnerabilities. Aerobic exercise may have potential as a temporary substitute for psychotherapy aimed at reducing anxiety-related psychopathology. PMID:25874370
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
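The workflow described in that abstract — jointly sampling demographic and habitat inputs, running a coupled population model, and ranking inputs by their influence on predicted extinction risk — can be sketched with a toy model. This is not GRIP 2.0: the population model, growth constants, and parameter ranges below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def extinction_risk(habitat, survival, rust_mortality, reps=100, years=40):
    """Toy stand-in for a coupled SDM-population model: growth scaled by
    adult survival and blister-rust mortality, with carrying capacity
    proportional to habitat amount; returns the fraction of stochastic
    runs that fall below a quasi-extinction threshold."""
    extinct = 0
    for _ in range(reps):
        n = 100.0
        for _ in range(years):
            r = 1.3 * survival * (1.0 - rust_mortality)  # per-capita growth
            n = min(n * r, 1000.0 * habitat)             # habitat caps population
            n *= rng.lognormal(0.0, 0.15)                # environmental noise
            if n < 2.0:
                extinct += 1
                break
    return extinct / reps

# Global sensitivity analysis: sample all inputs jointly (unlike a
# one-at-a-time design), then rank by |rank correlation| with risk.
m = 150
params = {
    "habitat":        rng.uniform(0.2, 1.0, m),
    "survival":       rng.uniform(0.70, 0.95, m),
    "rust_mortality": rng.uniform(0.0, 0.30, m),
}
risk = np.array([extinction_risk(h, s, d)
                 for h, s, d in zip(params["habitat"], params["survival"],
                                    params["rust_mortality"])])

def rank_corr(x, y):
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(x), rank(y))[0, 1]

for name in sorted(params, key=lambda k: -abs(rank_corr(params[k], risk))):
    print(f"{name:15s} |rho| = {abs(rank_corr(params[name], risk)):.2f}")
```

Because all inputs vary simultaneously, interaction effects (such as the habitat-survival interaction the abstract highlights) influence the resulting ranking, which a one-at-a-time design would miss.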
Helton, J.C.; Bean, J.E.; Butcher, B.M.; Garner, J.W.; Vaughn, P.; Schreiber, J.D.; Swift, P.N.
1993-08-01
Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights on factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
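The sampling-plus-regression strategy named in that abstract can be sketched as follows. The response function standing in for a BRAGFLO output, the variable names, and the parameter bounds are all invented for illustration; only the method (Latin hypercube sampling followed by standardized regression coefficients as an importance ranking) matches the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

def latin_hypercube(n, bounds):
    """Basic Latin hypercube sample: one stratified draw per interval,
    shuffled independently for each variable."""
    d = len(bounds)
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])           # in-place shuffle of column j
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Hypothetical uncertain inputs (loosely named after the abstract's list)
names = ["brine_sat", "corrosion_stoich", "microbial_stoich", "seal_perm"]
bounds = [(0.0, 1.0), (0.5, 2.0), (0.1, 1.0), (1e-18, 1e-14)]
x = latin_hypercube(500, bounds)

# Toy response standing in for cumulative gas production
gas = (3.0 * x[:, 0] * x[:, 1]       # steel corrosion requires brine
       + 1.0 * x[:, 2]               # microbial degradation of cellulosics
       + 0.1 * np.log10(x[:, 3])     # weak seal-permeability effect
       + rng.normal(0.0, 0.1, 500))  # unexplained variability

# Standardized regression coefficients as a simple importance ranking
z = (x - x.mean(axis=0)) / x.std(axis=0)
beta, *_ = np.linalg.lstsq(z, (gas - gas.mean()) / gas.std(), rcond=None)
for name, b in sorted(zip(names, beta), key=lambda t: -abs(t[1])):
    print(f"{name:18s} SRC = {b:+.2f}")
```

The same sampled design supports the other techniques the abstract lists: partial correlation coefficients and stepwise regression reuse the matrix `x` and response `gas`, and scatterplots of each column against the response reveal nonlinear effects the linear coefficients miss.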