NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity because it is model independent and provides accurate sensitivity measures. However, the conventional variance-based method considers only the uncertainty contributed by individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric, and each layer can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. Variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and combinations of uncertainty components that represent key model system processes (e.g., groundwater recharge, flow and reactive transport). For demonstration, the developed methodology was applied to a real-world groundwater reactive transport modeling test case with various uncertainty sources. The results show that the new sensitivity analysis method estimates accurate importance measures for any uncertainty source formed from different combinations of uncertainty components. The new methodology can provide useful information to environmental managers and decision-makers for formulating policies and strategies.
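A minimal numpy sketch of the layered variance decomposition behind such a framework is given below; the toy response, the scenario and model weights, and the parameter distributions are all hypothetical and are not taken from the abstract above. It only illustrates how scenario-, model-, and parameter-level sensitivity indices follow from the law of total variance.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2 scenarios x 2 conceptual models, each with its own
# parametric uncertainty. None of these numbers come from the paper.
scenarios = {"wet": 0.6, "dry": 0.4}            # P(scenario)
models    = {"linear": 0.5, "quadratic": 0.5}   # P(model), assumed scenario-independent

def simulate(scenario, model, n):
    recharge = rng.normal(1.2 if scenario == "wet" else 0.6, 0.2, n)  # parametric layer
    conductivity = rng.lognormal(0.0, 0.3, n)
    if model == "linear":
        return recharge * conductivity
    return recharge * conductivity**2

n = 20_000
cond = {(s, m): simulate(s, m, n) for s in scenarios for m in models}
mean_sm = {k: v.mean() for k, v in cond.items()}   # conditional means E[y | s, m]
var_sm  = {k: v.var()  for k, v in cond.items()}   # conditional variances Var(y | s, m)

mean_s = {s: sum(models[m] * mean_sm[(s, m)] for m in models) for s in scenarios}  # E[y | s]
mu = sum(scenarios[s] * mean_s[s] for s in scenarios)                               # E[y]

# Layer contributions via the law of total variance:
# Var(y) = Var_s(E[y|s]) + E_s[Var_m(E[y|s,m])] + E_{s,m}[Var(y|s,m)]
v_scenario = sum(scenarios[s] * (mean_s[s] - mu) ** 2 for s in scenarios)
v_model = sum(scenarios[s] * sum(models[m] * (mean_sm[(s, m)] - mean_s[s]) ** 2 for m in models)
              for s in scenarios)
v_param = sum(scenarios[s] * models[m] * var_sm[(s, m)] for s in scenarios for m in models)
v_total = v_scenario + v_model + v_param

for name, v in [("scenario", v_scenario), ("model", v_model), ("parameters", v_param)]:
    print(f"{name:10s} sensitivity index: {v / v_total:.3f}")

The three printed indices sum to one by construction, mirroring how a hierarchical framework partitions total predictive variance across layers.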
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
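The sketch below illustrates, with entirely hypothetical reduction-function models and soil-moisture scenarios, how a per-parameter first-order index can be averaged over models and scenarios with assumed probabilities; it is not the authors' estimator, just a crude conditional-mean construction of the same idea.

import numpy as np

rng = np.random.default_rng(1)

def first_order_index(model, which, n=20_000):
    # Crude first-order Sobol index: variance of the conditional mean on a grid / total variance.
    draw = lambda m: {"k_nit": rng.uniform(0.5, 1.5, m), "k_den": rng.uniform(0.1, 0.9, m)}
    y_all = model(**draw(n))
    bounds = {"k_nit": (0.5, 1.5), "k_den": (0.1, 0.9)}
    cond_means = []
    for xi in np.linspace(*bounds[which], 25):
        x = draw(2_000)
        x[which] = np.full(2_000, xi)      # freeze the studied input, resample the rest
        cond_means.append(model(**x).mean())
    return np.var(cond_means) / np.var(y_all)

# Two hypothetical reduction-function models and two soil-moisture scenarios;
# the scenario enters as a multiplicative reduction factor r on nitrification.
models = {"m1": (lambda k_nit, k_den, r: r * k_nit - 0.5 * k_den, 0.5),
          "m2": (lambda k_nit, k_den, r: r * k_nit * np.exp(-k_den), 0.5)}
scenarios = {"moist": (1.0, 0.7), "dry": (0.6, 0.3)}   # (reduction factor, scenario weight)

for which in ("k_nit", "k_den"):
    s_avg = sum(pm * ps * first_order_index(lambda k_nit, k_den, f=f, r=r: f(k_nit, k_den, r), which)
                for f, pm in models.values() for r, ps in scenarios.values())
    print(which, "model/scenario-averaged first-order index ≈", round(s_avg, 3))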
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...
Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty
Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.
2016-09-12
Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
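As a loose illustration of a sensitivity index expressed as a function of the amount of uncertainty reduction (not the authors' DSA formulation), the following sketch shrinks one input's standard deviation by a fraction alpha and reports the relative output variance removed; the model and input spreads are invented.

import numpy as np

rng = np.random.default_rng(2)

# Toy quantity of interest standing in for a design metric; the model and the input
# standard deviations are illustrative, not those of the paper.
def qoi(x1, x2, x3):
    return x1 + 0.5 * x2**2 + 0.2 * x1 * x3

base_sd = {"x1": 1.0, "x2": 0.8, "x3": 1.5}
n = 200_000

def output_variance(sd):
    x = {k: rng.normal(0.0, s, n) for k, s in sd.items()}
    return qoi(**x).var()

v0 = output_variance(base_sd)
print("relative variance removed vs. fraction of input-sd reduction")
for name in base_sd:
    row = []
    for alpha in (0.25, 0.5, 0.75, 1.0):    # alpha = 1.0 recovers the usual 'reduce to zero' case
        sd = dict(base_sd, **{name: (1 - alpha) * base_sd[name]})
        row.append((v0 - output_variance(sd)) / v0)
    print(name, [f"{s:.2f}" for s in row])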
The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and Parameter Estimation (UA/SA/PE API) tool development, hereafter referred to as the Calibration, Optimization, and Sensitivity and Uncertainty Algorithms API (COSU-API), was initially d...
Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-08-07
The sensitivity and uncertainty analysis course will introduce students to keff sensitivity data, cross-section uncertainty data, how keff sensitivity data and keff uncertainty data are generated and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.
Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas
2014-03-01
GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
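A compact illustration of the Monte Carlo weight-sensitivity step is sketched below for a simple additive-weighting score; the criteria values, weight perturbation size, and criterion names are hypothetical, and the real study used AHP/OWA over full raster layers.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardized criteria layers (slope, lithology, rainfall) for four map cells;
# cell values and the nominal AHP weights are made up for illustration.
criteria = np.array([[0.9, 0.2, 0.7],
                     [0.4, 0.8, 0.3],
                     [0.6, 0.5, 0.9],
                     [0.1, 0.9, 0.4]])
names = ["slope", "lithology", "rainfall"]
w0 = np.array([0.5, 0.3, 0.2])                       # nominal AHP weights

# Monte Carlo over the weights: perturb, renormalize to sum to one, score every cell.
n = 5_000
w = np.abs(rng.normal(w0, 0.05, size=(n, 3)))
w /= w.sum(axis=1, keepdims=True)
scores = w @ criteria.T                               # simple additive weighting, shape (n, cells)

print("cell score mean:", np.round(scores.mean(axis=0), 3))
print("cell score sd  :", np.round(scores.std(axis=0), 3))   # per-cell susceptibility uncertainty

# Crude global sensitivity of cell 0's score to each weight: correlation over the Monte Carlo sample.
for j, name in enumerate(names):
    r = np.corrcoef(w[:, j], scores[:, 0])[0, 1]
    print(f"corr(weight_{name}, score of cell 0) = {r:+.2f}")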
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed them to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
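Of the techniques listed, the rank-correlation step is easy to sketch; the toy inputs and the stand-in concentration model below are hypothetical and only show how |Spearman rho| can be used to score parameter influence from Monte Carlo samples.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Toy stand-in for a PCB fate model: concentration as a nonlinear function of a few
# uncertain inputs (partition coefficient, settling rate, degradation rate). All hypothetical.
n = 2_000
inputs = {
    "log_Kow":  rng.normal(6.0, 0.3, n),
    "settling": rng.uniform(0.5, 2.0, n),
    "decay":    rng.lognormal(-3.0, 0.5, n),
}
conc = 10 ** (inputs["log_Kow"] - 6) / (inputs["settling"] + 50 * inputs["decay"])

# Rank-correlation screening: score each input by |Spearman rho| with the prediction,
# mirroring one of the several techniques combined in the study.
scores = {}
for k, v in inputs.items():
    rho, _ = spearmanr(v, conc)
    scores[k] = abs(rho)
for k, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{k:9s} |rho| = {s:.2f}")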
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated inputs frequently occur in practical applications. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method for general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
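For the special case of a linear response, the split between independent and correlation-induced contributions to output variance can be written down directly; the sketch below uses an invented coefficient vector and covariance matrix and checks the analytic result against Monte Carlo sampling.

import numpy as np

# For a linear response y = a·x with correlated inputs, the output variance splits analytically
# into an 'independent' part and a 'correlation' part:
#   Var(y) = sum_i a_i^2 sigma_i^2 + sum_{i != j} a_i a_j Cov(x_i, x_j)
# Coefficients and the covariance matrix below are illustrative, not from the paper.
a = np.array([1.0, -2.0, 0.5])
cov = np.array([[1.0, 0.6, 0.0],
                [0.6, 2.0, -0.3],
                [0.0, -0.3, 0.5]])

var_total = a @ cov @ a                       # full sandwich rule
var_indep = np.sum(a**2 * np.diag(cov))       # what independence alone would predict
var_corr = var_total - var_indep              # contribution of the input correlations

print(f"Var(y) = {var_total:.3f}  (independent part {var_indep:.3f}, correlation part {var_corr:+.3f})")

# Monte Carlo check of the analytic result
rng = np.random.default_rng(5)
x = rng.multivariate_normal(np.zeros(3), cov, size=500_000)
print("Monte Carlo Var(y) ≈", round((x @ a).var(), 3))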
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
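A minimal illustration of how such first-order sensitivities feed an uncertainty analysis is the sandwich rule below; the sensitivity vector and relative covariance matrix are invented, and the constraint treatment of Perkó et al. is not reproduced, only noted.

import numpy as np

# Hypothetical relative k-eff sensitivities to three nuclide densities (% change in k per % change in N);
# neither these values nor the covariance data come from the paper.
s = np.array([0.30, -0.05, 0.10])
cov_rel = np.array([[4.0e-4, 1.0e-4, 0.0],
                    [1.0e-4, 9.0e-4, 0.0],
                    [0.0,    0.0,    1.0e-4]])   # relative covariance of the densities

# First-order ("sandwich rule") propagation: (dk/k)^2 = s · C_rel · s^T
rel_sd_k = np.sqrt(s @ cov_rel @ s)
print(f"relative standard deviation of k-eff ≈ {100 * rel_sd_k:.3f} %")

# Correlations or constraints among the densities enter only through C_rel (off-diagonal terms)
# or through pre-combined, constraint-aware sensitivities substituted for s.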
Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...
Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B
2016-11-01
As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. The objective of this study was to demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection.
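A bare-bones version of the Morris elementary-effects screening used in the first step can be sketched in a few lines; the trajectory scheme is simplified, and the toy output function is hypothetical rather than taken from the stroke model.

import numpy as np

rng = np.random.default_rng(6)

def morris_screen(model, bounds, r=30, delta=0.2):
    # Minimal Morris elementary-effects screening: returns (mu*, sigma) per input.
    k = len(bounds)
    lo, hi = np.array(bounds).T
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(lo, hi)                       # random trajectory start
        y = model(x)
        for i in rng.permutation(k):                  # one-at-a-time moves in random order
            step = delta * (hi[i] - lo[i]) * rng.choice([-1.0, 1.0])
            x2 = x.copy()
            x2[i] = np.clip(x2[i] + step, lo[i], hi[i])
            y2 = model(x2)
            effects[t, i] = (y2 - y) / ((x2[i] - x[i]) / (hi[i] - lo[i]))
            x, y = x2, y2
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy stand-in for a model output (e.g. stroke events per 1,000 person-years); purely illustrative.
toy = lambda p: p[0] * 10 + p[1] ** 2 + 0.1 * p[2] + p[0] * p[3]
mu_star, sigma = morris_screen(toy, bounds=[(0, 1)] * 4)
# Large mu* flags parameters worth calibrating; large sigma flags interactions or non-linearity.
for i, (m, s) in enumerate(zip(mu_star, sigma)):
    print(f"param {i}: mu* = {m:.2f}, sigma = {s:.2f}")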
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
Sensitivity of wildlife habitat models to uncertainties in GIS data
NASA Technical Reports Server (NTRS)
Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.
1992-01-01
Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. Uncertainties and methods described in the paper have general relevance for many GIS applications.
The NASA Langley Multidisciplinary Uncertainty Quantification Challenge
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.
Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS
Brown, C. S.; Zhang, Hongbin
2016-05-24
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2×2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. The parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
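The three correlation measures named above can be computed from sampled inputs and a figure of merit as sketched below; the input distributions and the linear stand-in for MDNBR are invented, and partial correlations are obtained by correlating regression residuals.

import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(7)

# Toy samples standing in for uncertain inputs and a figure of merit (e.g. MDNBR); all values illustrative.
n = 1_000
X = np.column_stack([
    rng.normal(565.0, 2.0, n),     # coolant inlet temperature [K]
    rng.normal(15.5, 0.1, n),      # system pressure [MPa]
    rng.normal(1.0, 0.02, n),      # flow-rate multiplier [-]
])
y = 3.0 - 0.04 * (X[:, 0] - 565) + 0.2 * (X[:, 1] - 15.5) + 1.5 * (X[:, 2] - 1) + rng.normal(0, 0.01, n)

names = ["T_inlet", "pressure", "flow"]
for j, name in enumerate(names):
    pr, _ = pearsonr(X[:, j], y)
    sr, _ = spearmanr(X[:, j], y)
    # Partial correlation: correlate the residuals of x_j and y after regressing out the other inputs.
    others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
    rx = X[:, j] - others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
    ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
    pc, _ = pearsonr(rx, ry)
    print(f"{name:8s} Pearson {pr:+.2f}  Spearman {sr:+.2f}  partial {pc:+.2f}")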
3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, Greg; Wohlwend, Jen
2017-10-02
This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.
Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende
2014-01-01
Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global change, from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.
2011-12-01
The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.
Eigenvalue Contribution Estimator for Sensitivity Calculations with TSUNAMI-3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Williams, Mark L
2007-01-01
Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.
USDA-ARS's Scientific Manuscript database
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gases and air pollutants. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of the uncertainties related to atmospheric emission inventories and projections provides useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they can be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information for deriving nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
USDA-ARS's Scientific Manuscript database
For several decades, optimization and sensitivity/uncertainty analysis of environmental models has been the subject of extensive research. Although much progress has been made and sophisticated methods developed, the growing complexity of environmental models to represent real-world systems makes it...
Sweetapple, Christine; Fu, Guangtao; Butler, David
2013-09-01
This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher-order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher-order effects of these parameters are also shown to be a key source of uncertainty in effluent quality.
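A sketch of the Sobol workflow (Saltelli sampling, then first-, total-, and second-order indices) is shown below using the SALib package, assuming it is installed; the three-parameter toy model with one interaction term stands in for the wastewater plant model and is not from the study. Newer SALib releases expose the same sampler as SALib.sample.sobol.

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy surrogate for an emissions output with an interaction term; parameter names and bounds
# are hypothetical, chosen only to echo the nitrogen-conversion flavour of the study.
problem = {
    "num_vars": 3,
    "names": ["k_nitrif", "k_denit", "aeration"],
    "bounds": [[0.5, 1.5], [0.1, 0.9], [0.5, 2.0]],
}

def model(X):
    k_nit, k_den, air = X.T
    return k_nit * air + 5.0 * k_nit * k_den   # interaction between the two conversion parameters

X = saltelli.sample(problem, 1024, calc_second_order=True)
Y = model(X)
Si = sobol.analyze(problem, Y, calc_second_order=True)

print("first-order :", np.round(Si["S1"], 2))
print("total-order :", np.round(Si["ST"], 2))
print("S2 (pairs)  :\n", np.round(Si["S2"], 2))   # pairwise interaction effects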
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not clear. The NATO Science and Technology Organization Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.
ERD’s Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE) is a key to enhancing quality assurance in environmental models and applications. Uncertainty analysis and sensitivity analysis remain critical, though often overlooked steps in the development and e...
Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul
2013-11-01
This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
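The standardized-regression-coefficient step can be sketched as follows; the parameter distributions and the scalar stand-in for a CSD summary are hypothetical, and the point is only that SRCs are the coefficients of a linear regression on standardized Monte Carlo samples.

import numpy as np

rng = np.random.default_rng(8)

# Monte Carlo propagation of uncertainty in (hypothetical) nucleation/growth parameters to a
# scalar output standing in for a CSD summary (e.g. mean crystal size). Values are illustrative.
n = 3_000
theta = {
    "b_nucleation": rng.normal(2.0, 0.2, n),   # nucleation order
    "g_growth":     rng.normal(1.5, 0.1, n),   # growth order
    "k_growth":     rng.lognormal(0.0, 0.1, n),
}
y = 100 * theta["k_growth"] ** theta["g_growth"] / theta["b_nucleation"]

# Standardized regression coefficients: regress standardized y on standardized inputs;
# |SRC| ranks linear parameter influence, and SRC^2 values sum to R^2 for a near-linear model.
Z = np.column_stack([(v - v.mean()) / v.std() for v in theta.values()])
yz = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Z, yz, rcond=None)
for name, c in zip(theta, src):
    print(f"{name:13s} SRC = {c:+.2f}")
print("R² of the linear surrogate:", round(1 - ((yz - Z @ src) ** 2).mean(), 2))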
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
Perfetti, Christopher M.; Rearden, Bradley T.
2016-03-01
The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is verified against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can markedly reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.
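The verification step, comparing analytic sensitivities against central finite differences, can be illustrated on a toy performance function (not the cable-network model) as follows.

import numpy as np

# Toy stand-in for a performance measure (e.g. an RMS surface error) as a function of two
# uncertain parameters; the function and the evaluation point are illustrative only.
def f(p):
    x, y = p
    return np.sqrt(x**2 + 0.5 * x * y + 2.0 * y**2)

def grad_analytic(p):
    x, y = p
    g = f(p)
    return np.array([2 * x + 0.5 * y, 0.5 * x + 4 * y]) / (2 * g)

def grad_fd(p, h=1e-6):
    # Central finite differences, used here only to verify the analytic sensitivities.
    g = np.zeros_like(p, dtype=float)
    for i in range(len(p)):
        dp = np.zeros_like(p, dtype=float)
        dp[i] = h
        g[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return g

p0 = np.array([1.2, 0.7])
print("analytic         :", grad_analytic(p0))
print("finite difference:", grad_fd(p0))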
Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2010-06-01
The need for a defendable and systematic Uncertainty and Sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
NASA Astrophysics Data System (ADS)
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable. Therefore, it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. This paper presents a sensitivity analysis for several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis test. The first uncertainty is related to the input data, while the second relates to the Decision Maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, Justin Matthew
These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
Using global sensitivity analysis of demographic models for ecological impact assessment.
Aiello-Lammens, Matthew E; Akçakaya, H Resit
2017-02-01
Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
A multi-model assessment of terrestrial biosphere model data needs
NASA Astrophysics Data System (ADS)
Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.
2017-12-01
Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
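A first-order sketch of the variance-decomposition idea used in such workflows is given below: each trait's contribution to predictive variance is approximated by its squared sensitivity times its posterior variance. The trait names, sensitivities, and standard deviations are hypothetical, not values from BETYdb/TRY or PEcAn.

import numpy as np

# First-order (one-at-a-time) variance decomposition in the spirit of the workflow described:
# each trait's contribution ≈ (d output / d trait)^2 · Var(trait). All numbers are invented.
traits = ["SLA", "Vcmax", "leaf_respiration", "root_turnover"]
sensitivity = np.array([0.8, 1.5, -0.4, 0.1])        # d NPP / d trait at the posterior median
trait_sd = np.array([0.30, 0.20, 0.25, 0.50])         # posterior sd from a meta-analysis

contrib = (sensitivity * trait_sd) ** 2
frac = contrib / contrib.sum()
for t, fr in sorted(zip(traits, frac), key=lambda z: -z[1]):
    print(f"{t:17s} {100 * fr:5.1f} % of predictive variance")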
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Ye, Ming; Walker, Anthony P.
Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
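As an illustration of the process sensitivity index described above, the sketch below estimates Var[E(output | process A)] / Var(output) by nested Monte Carlo: the outer loop fixes the recharge process (a model choice plus its parameter) and the inner loop varies the geology process. The model forms, prior model weights, and parameter distributions are invented placeholders, not the configuration used in the study.

```python
import numpy as np
rng = np.random.default_rng(1)

# two alternative models per process, each driven by one random parameter
# (hypothetical functional forms, for illustration only)
recharge_models = [lambda p: 0.2 * p, lambda p: 0.1 * p + 5.0]
geology_models = [lambda k: 50.0 * k, lambda k: 30.0 * k + 10.0]

def sample_process(models):
    """Draw one process realization: pick a model (equal prior weights),
    then draw its parameter."""
    m = rng.integers(len(models))
    p = rng.lognormal(mean=0.0, sigma=0.5)
    return models[m](p)

def head(recharge, conductivity):
    # toy system response standing in for a simulated head or concentration
    return recharge * 100.0 / conductivity

def process_sensitivity(n_outer=1000, n_inner=200):
    """PS index of the recharge process: Var_A[E(Y|A)] / Var(Y),
    where A bundles the recharge model choice and its parameter."""
    cond_means, all_outputs = [], []
    for _ in range(n_outer):
        r = sample_process(recharge_models)            # fix the recharge process
        inner = [head(r, sample_process(geology_models)) for _ in range(n_inner)]
        cond_means.append(np.mean(inner))
        all_outputs.extend(inner)
    return np.var(cond_means) / np.var(all_outputs)

print(process_sensitivity())
```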
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless, we still find high concentrations in measurements, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement EU-wide existing policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model, and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
Andronis, L; Barton, P; Bryan, S
2009-06-01
To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
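A minimal sketch of the two propagation routes discussed above, assuming a simple linear impact model with invented numbers: the analytical route propagates the input covariance through a first-order Taylor expansion, var(y) = g'Cg, while the sampling route draws correlated and uncorrelated Monte Carlo samples so the effect of ignoring the correlation can be compared directly.

```python
import numpy as np
rng = np.random.default_rng(0)

# hypothetical LCA-style model: impact = 2*x1 + 0.5*x2 (linear, for illustration)
def impact(x):
    return 2.0 * x[..., 0] + 0.5 * x[..., 1]

mu = np.array([10.0, 40.0])
sigma = np.array([1.0, 6.0])
rho = 0.8                                     # assumed correlation between inputs
cov = np.array([[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1] ** 2]])

# analytical (first-order Taylor) propagation: var(y) = g' C g
g = np.array([2.0, 0.5])                      # gradient of the linear model
var_analytical = g @ cov @ g

# sampling propagation with and without the correlation
x_corr = rng.multivariate_normal(mu, cov, size=100_000)
x_ind = rng.multivariate_normal(mu, np.diag(sigma ** 2), size=100_000)
print(var_analytical, impact(x_corr).var(), impact(x_ind).var())
```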
PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS
Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...
NASA Astrophysics Data System (ADS)
Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.
2018-01-01
The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
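The variance-based indices used in studies like this one can be estimated with the standard A/B/AB sampling scheme. The sketch below substitutes a toy three-parameter response for the VIC emulator and uses the Saltelli/Jansen estimators for first-order and total-order indices; parameter names, ranges, and the response function are placeholders, not the actual VIC setup.

```python
import numpy as np
rng = np.random.default_rng(42)

def model(x):
    # stand-in for the emulated hydrologic response (e.g. change in runoff)
    return x[:, 0] + 0.3 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

d, n = 3, 50_000
A, B = rng.random((n, d)), rng.random((n, d))    # two independent sample matrices
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # column i swapped in from B
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y       # first-order index (Saltelli 2010)
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y # total-order index (Jansen)
    print(f"parameter {i}: S1={S1:.3f}, ST={ST:.3f}")
```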
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Bhat, Kabekode Ghanasham
2017-07-18
We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.
Dai, Heng; Ye, Ming; Walker, Anthony P.; ...
2017-03-28
A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
NASA Astrophysics Data System (ADS)
Aleksankina, Ksenia; Heal, Mathew R.; Dore, Anthony J.; Van Oijen, Marcel; Reis, Stefan
2018-04-01
Atmospheric chemistry transport models (ACTMs) are widely used to underpin policy decisions associated with the impact of potential changes in emissions on future pollutant concentrations and deposition. It is therefore essential to have a quantitative understanding of the uncertainty in model output arising from uncertainties in the input pollutant emissions. ACTMs incorporate complex and non-linear descriptions of chemical and physical processes which means that interactions and non-linearities in input-output relationships may not be revealed through the local one-at-a-time sensitivity analysis typically used. The aim of this work is to demonstrate a global sensitivity and uncertainty analysis approach for an ACTM, using as an example the FRAME model, which is extensively employed in the UK to generate source-receptor matrices for the UK Integrated Assessment Model and to estimate critical load exceedances. An optimised Latin hypercube sampling design was used to construct model runs within a ±40 % variation range for the UK emissions of SO2, NOx, and NH3, from which regression coefficients for each input-output combination and each model grid (>10 000 across the UK) were calculated. Surface concentrations of SO2, NOx, and NH3 (and of deposition of S and N) were found to be predominantly sensitive to the emissions of the respective pollutant, while sensitivities of secondary species such as HNO3 and particulate SO4^2-, NO3^-, and NH4^+ to pollutant emissions were more complex and geographically variable. The uncertainties in model output variables were propagated from the uncertainty ranges reported by the UK National Atmospheric Emissions Inventory for the emissions of SO2, NOx, and NH3 (±4, ±10, and ±20 % respectively). The uncertainties in the surface concentrations of NH3 and NOx and the depositions of NHx and NOy were dominated by the uncertainties in emissions of NH3 and NOx respectively, whilst concentrations of SO2 and deposition of SOy were affected by the uncertainties in both SO2 and NH3 emissions. Likewise, the relative uncertainties in the modelled surface concentrations of each of the secondary pollutant variables (NH4^+, NO3^-, SO4^2-, and HNO3) were due to uncertainties in at least two input variables. In all cases the spatial distribution of relative uncertainty was found to be geographically heterogeneous. The global methods used here can be applied to conduct sensitivity and uncertainty analyses of other ACTMs.
UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E
A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
Rahman, A.; Tsai, F.T.-C.; White, C.D.; Willson, C.S.
2008-01-01
This study investigates capture zone uncertainty that relates to the coupled semivariogram uncertainty of hydrogeological and geophysical data. Semivariogram uncertainty is represented by the uncertainty in structural parameters (range, sill, and nugget). We used the beta distribution function to derive the prior distributions of structural parameters. The probability distributions of structural parameters were further updated through the Bayesian approach with the Gaussian likelihood functions. Cokriging of noncollocated pumping test data and electrical resistivity data was conducted to better estimate hydraulic conductivity through autosemivariograms and pseudo-cross-semivariogram. Sensitivities of capture zone variability with respect to the spatial variability of hydraulic conductivity, porosity and aquifer thickness were analyzed using ANOVA. The proposed methodology was applied to the analysis of capture zone uncertainty at the Chicot aquifer in Southwestern Louisiana, where a regional groundwater flow model was developed. MODFLOW-MODPATH was adopted to delineate the capture zone. The ANOVA results showed that both capture zone area and compactness were sensitive to hydraulic conductivity variation. We concluded that the capture zone uncertainty due to the semivariogram uncertainty is much higher than that due to the kriging uncertainty for given semivariograms. In other words, the sole use of conditional variances of kriging may greatly underestimate the flow response uncertainty. Semivariogram uncertainty should also be taken into account in the uncertainty analysis. © 2008 ASCE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. S.; Zhang, Hongbin
Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
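For reference, the Pearson, Spearman, and partial correlation coefficients of the kind used in this study can be computed as below. The three inputs and the MDNBR-like response are synthetic stand-ins, not VERA-CS quantities; the partial correlation is obtained by correlating the residuals after regressing out the other inputs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500
X = rng.normal(size=(n, 3))                     # e.g. inlet temperature, flow, power
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)  # toy MDNBR surrogate

def partial_corr(x, y, Z):
    """Correlation of x and y after removing the linear effect of the other inputs Z."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])  # include an intercept
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)[0]

for i in range(X.shape[1]):
    Z = np.delete(X, i, axis=1)
    print(i,
          stats.pearsonr(X[:, i], y)[0],
          stats.spearmanr(X[:, i], y)[0],
          partial_corr(X[:, i], y, Z))
```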
Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes
NASA Astrophysics Data System (ADS)
Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris
2017-12-01
Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
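A compact sketch of the GLUE procedure applied above, using invented stand-ins for the lake model, the observations, and the behavioural threshold: parameter sets are sampled, scored with a likelihood measure (here NSE), non-behavioural sets are rejected, and likelihood-weighted percentiles give the flux uncertainty bounds.

```python
import numpy as np
rng = np.random.default_rng(3)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# toy flux "model": flux = a * forcing ** b (stand-in for the lake model)
forcing = np.linspace(1.0, 10.0, 50)
obs = 2.0 * forcing ** 0.7 + rng.normal(scale=0.5, size=forcing.size)

n = 10_000
a = rng.uniform(0.5, 5.0, n)                      # sampled parameter sets
b = rng.uniform(0.3, 1.2, n)
sims = a[:, None] * forcing[None, :] ** b[:, None]
L = np.array([nse(s, obs) for s in sims])

keep = L > 0.6                                    # behavioural threshold (subjective)
w = L[keep] / L[keep].sum()                       # likelihood weights

def weighted_bounds(values, weights, lo=0.05, hi=0.95):
    idx = np.argsort(values)
    cdf = np.cumsum(weights[idx])
    return values[idx][np.searchsorted(cdf, lo)], values[idx][np.searchsorted(cdf, hi)]

# 5-95 % uncertainty bounds of the simulated flux at each time step
bounds = [weighted_bounds(sims[keep, t], w) for t in range(forcing.size)]
print(bounds[0], bounds[-1])
```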
SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN
In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...
NASA Astrophysics Data System (ADS)
Paul, M.; Negahban-Azar, M.
2017-12-01
Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through a careful model calibration. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable. Therefore, it is difficult to calibrate the model for a large number of potentially uncertain model parameters. This becomes even more challenging if the model is for a large watershed with multiple land uses and various geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect the calibrated model performance. There are many different calibration and uncertainty analysis algorithms which can be performed with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm in improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied in the San Joaquin Watershed in California, covering 19704 km2, to calibrate the daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and depletion of groundwater for agricultural irrigation. Given the uncertainties inherent in hydrologic modeling, it is therefore important to perform a proper uncertainty analysis when predicting the spatial and temporal variation of hydrologic processes and evaluating the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting - SUFI-2, Generalized Likelihood Uncertainty Estimation - GLUE, and Parameter Solution - ParaSol) were used with four different objective functions (coefficient of determination - r2, Nash-Sutcliffe efficiency - NSE, percent bias - PBIAS, and Kling-Gupta efficiency - KGE). The preliminary results showed that using the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., r2 and NSE of 0.52 and 0.47, respectively, for the daily streamflow calibration).
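The objective functions mentioned above (NSE, PBIAS, KGE) can be written in a few lines; the flow values below are invented and only illustrate the calculations, not the San Joaquin results.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    """Percent bias; positive values indicate underestimation."""
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def kge(sim, obs):
    """Kling-Gupta efficiency from correlation, variability, and bias ratios."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([12.0, 18.0, 35.0, 22.0, 15.0])   # observed daily flow (m3/s), invented
sim = np.array([10.0, 20.0, 30.0, 25.0, 14.0])   # simulated flow, invented
print(nse(sim, obs), pbias(sim, obs), kge(sim, obs))
```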
Global sensitivity analysis of multiscale properties of porous materials
NASA Astrophysics Data System (ADS)
Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.
2018-02-01
Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.
Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V
2009-01-01
The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standard Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S_NH) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S_NO) uncertainty, increasing both their economic cost and variability as a trade-off. Finally, the maximum specific autotrophic growth rate (μ_A) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification-related parameters, e.g. η_g (anoxic growth rate correction factor) and η_h (anoxic hydrolysis rate correction factor), becomes less important when a S_NO controller manipulating an external carbon source addition is implemented.
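A minimal sketch of the SRC method used in this study: inputs and output from a Monte Carlo sample are standardized and a linear regression is fitted, so each coefficient (squared) approximates that input's share of output variance when the model is close to linear. The parameter names, ranges, and EQI-like response below are invented for illustration, not the ASM1/BSM1 values.

```python
import numpy as np
rng = np.random.default_rng(11)

# Monte Carlo sample of ASM1-like inputs (hypothetical ranges) and a toy EQI response
n = 2000
mu_A = rng.normal(0.8, 0.10, n)       # autotrophic max growth rate
eta_g = rng.normal(0.8, 0.08, n)      # anoxic growth correction factor
f_S = rng.normal(0.2, 0.03, n)        # readily biodegradable influent fraction
X = np.column_stack([mu_A, eta_g, f_S])
eqi = 500.0 - 300.0 * mu_A - 50.0 * eta_g + 120.0 * f_S + rng.normal(0, 5, n)

# standardized regression coefficients: regress standardized output on standardized inputs
Xs = (X - X.mean(0)) / X.std(0)
ys = (eqi - eqi.mean()) / eqi.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(dict(zip(["mu_A", "eta_g", "f_S"], np.round(src, 3))))
```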
Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2017-04-01
Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models mostly depends on their input rainfall and parameter values. Both model parameters and input precipitation, however, are characterized by uncertainties and, therefore, lead to uncertainty in the model output. Sensitivity analysis (SA) allows us to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most, due to the different weight both types will have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear influence of the weights in the different SA scenarios. However, working with grouped factors resolves this issue and leads to clear importance results.
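The grouping trick described above can be sketched as follows: all rainfall multipliers are swapped between the two Sobol' sample matrices as one block, and all model parameters as another, yielding a single first-order index per uncertainty type. The rainfall-runoff response and the distributions below are toy placeholders, not NAM or HyMod.

```python
import numpy as np
rng = np.random.default_rng(5)

def model(params, mults):
    # toy rainfall-runoff response: two model parameters, five storm-event multipliers
    rain = np.array([5.0, 20.0, 0.0, 12.0, 8.0])      # recorded rainfall pulses (invented)
    eff = rain * mults                                  # pulses perturbed by the multipliers
    return params[:, 0] * eff.sum(axis=1) + params[:, 1] * eff.max(axis=1)

n = 100_000
sample = lambda: (rng.uniform(0.5, 1.5, (n, 2)), rng.lognormal(0.0, 0.3, (n, 5)))
Apar, Amul = sample()
Bpar, Bmul = sample()
yA, yB = model(Apar, Amul), model(Bpar, Bmul)
var_y = np.var(np.concatenate([yA, yB]))

# swap one whole group at a time between the A and B samples (Sobol' group index)
S_rain = np.mean(yB * (model(Apar, Bmul) - yA)) / var_y    # all rainfall multipliers
S_pars = np.mean(yB * (model(Bpar, Amul) - yA)) / var_y    # all model parameters
print(f"rainfall-uncertainty group: {S_rain:.2f}, parameter group: {S_pars:.2f}")
```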
NASA Astrophysics Data System (ADS)
Ruiz, Rafael O.; Meruane, Viviana
2017-06-01
The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters for the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
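A minimal sketch of the surrogate-based route described above, with invented numbers: a small Latin hypercube design is run through a cheap stand-in for the finite element decomposition model, a quadratic response surface is fitted by least squares, and Monte Carlo sampling is then performed on the surrogate instead of the expensive model.

```python
import numpy as np
rng = np.random.default_rng(9)

def lhs(n, d):
    """Simple Latin hypercube sample on [0, 1]^d."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])
    return u

def front_velocity(x):
    # stand-in for the expensive finite element decomposition model (invented)
    return 0.3 + 0.8 * x[:, 0] + 0.2 * x[:, 1] ** 2 - 0.1 * x[:, 0] * x[:, 2]

d, n_train = 3, 60
X = lhs(n_train, d)
y = front_velocity(X)

def features(X):
    # quadratic response surface basis: 1, x_i, and x_i * x_j terms
    cols = [np.ones(len(X))] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# cheap Monte Carlo on the fitted surrogate instead of the full model
Xmc = rng.random((200_000, d))
y_hat = features(Xmc) @ coef
print(y_hat.mean(), y_hat.std())
```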
2007-01-01
multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has... formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in... 2.5. 2.4 SENSITIVITY ANALYSIS: In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner...
This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
NASA Technical Reports Server (NTRS)
Gaebler, John A.; Tolson, Robert H.
2010-01-01
In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k_eff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for a gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
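The propagation step described above is often written as the "sandwich rule": the relative variance of a response equals S'CS, where S is the vector of relative sensitivity coefficients and C the relative covariance matrix of the cross-section data. The sketch below uses purely illustrative numbers, not actual evaluated covariance data.

```python
import numpy as np

# relative sensitivity coefficients of k_eff to a few cross sections,
# S_i = (dk/k)/(dsigma_i/sigma_i)  (illustrative values)
S = np.array([0.35, -0.12, 0.08])

# relative covariance matrix of the same cross sections (illustrative values)
C = np.array([[0.0025, 0.0005, 0.0000],
              [0.0005, 0.0100, 0.0010],
              [0.0000, 0.0010, 0.0040]])

# sandwich rule: relative variance of k_eff induced by cross-section uncertainty
rel_var_k = S @ C @ S
print(f"relative k_eff uncertainty: {np.sqrt(rel_var_k) * 100:.2f} %")
```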
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
Degeling, Koen; IJzerman, Maarten J; Koopman, Miriam; Koffijberg, Hendrik
2017-12-15
Parametric distributions based on individual patient data can be used to represent both stochastic and parameter uncertainty. Although general guidance is available on how parameter uncertainty should be accounted for in probabilistic sensitivity analysis, there is no comprehensive guidance on reflecting parameter uncertainty in the (correlated) parameters of distributions used to represent stochastic uncertainty in patient-level models. This study aims to provide this guidance by proposing appropriate methods and illustrating the impact of this uncertainty on modeling outcomes. Two approaches, 1) using non-parametric bootstrapping and 2) using multivariate Normal distributions, were applied in a simulation and case study. The approaches were compared based on point-estimates and distributions of time-to-event and health economic outcomes. To assess sample size impact on the uncertainty in these outcomes, sample size was varied in the simulation study and subgroup analyses were performed for the case study. Accounting for parameter uncertainty in distributions that reflect stochastic uncertainty substantially increased the uncertainty surrounding health economic outcomes, illustrated by larger confidence ellipses surrounding the cost-effectiveness point-estimates and different cost-effectiveness acceptability curves. Although both approaches performed similarly for larger sample sizes (i.e. n = 500), the second approach was more sensitive to extreme values for small sample sizes (i.e. n = 25), yielding infeasible modeling outcomes. Modelers should be aware that parameter uncertainty in distributions used to describe stochastic uncertainty needs to be reflected in probabilistic sensitivity analysis, as it could substantially impact the total amount of uncertainty surrounding health economic outcomes. If feasible, the bootstrap approach is recommended to account for this uncertainty.
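A minimal sketch of the bootstrap approach recommended above, assuming a Weibull time-to-event distribution and invented patient data: each bootstrap resample is refitted so the correlated shape/scale estimates carry parameter uncertainty, and a patient-level model run then draws one bootstrapped parameter set (parameter uncertainty) before sampling event times from it (stochastic uncertainty).

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(21)
# hypothetical individual-patient time-to-event data (months), small sample
times = weibull_min.rvs(1.4, loc=0, scale=18.0, size=25, random_state=rng)

# non-parametric bootstrap: refit the distribution to resampled patients so the
# (correlated) shape/scale estimates reflect parameter uncertainty
n_boot = 1000
params = []
for _ in range(n_boot):
    resample = rng.choice(times, size=times.size, replace=True)
    c, loc, scale = weibull_min.fit(resample, floc=0)
    params.append((c, scale))
params = np.array(params)

# one model run: draw a bootstrapped parameter set, then sample patient event times
c, scale = params[rng.integers(n_boot)]
event_times = weibull_min.rvs(c, loc=0, scale=scale, size=1000, random_state=rng)
print(params.mean(axis=0), event_times.mean())
```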
Uncertainty Analysis of the Grazing Flow Impedance Tube
NASA Technical Reports Server (NTRS)
Brown, Martha C.; Jones, Michael G.; Watson, Willie R.
2012-01-01
This paper outlines a methodology to identify the measurement uncertainty of NASA Langley's Grazing Flow Impedance Tube (GFIT) over its operating range, and to identify the parameters that most significantly contribute to the acoustic impedance prediction. Two acoustic liners are used for this study. The first is a single-layer, perforate-over-honeycomb liner that is nonlinear with respect to sound pressure level. The second consists of a wire-mesh facesheet and a honeycomb core, and is linear with respect to sound pressure level. These liners allow for evaluation of the effects of measurement uncertainty on impedances educed with linear and nonlinear liners. In general, the measurement uncertainty is observed to be larger for the nonlinear liners, with the largest uncertainty occurring near anti-resonance. A sensitivity analysis of the aerodynamic parameters (Mach number, static temperature, and static pressure) used in the impedance eduction process is also conducted using a Monte-Carlo approach. This sensitivity analysis demonstrates that the impedance eduction process is virtually insensitive to each of these parameters.
NASA Astrophysics Data System (ADS)
Allard, Alexandre; Fischer, Nicolas
2018-06-01
Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sig Drellack, Lance Prothro
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Bencic, T.; Sullivan, J. P.
1999-01-01
This article reviews new advances and applications of pressure sensitive paints in aerodynamic testing. Emphasis is placed on important technical aspects of pressure sensitive paint including instrumentation, data processing, and uncertainty analysis.
SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations
Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.
2016-02-25
Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.
NASA Astrophysics Data System (ADS)
Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano
2015-04-01
The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect that each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure, and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We modify the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models, and the diffusion coefficient, perform thousands of simulations, and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns rising up to several kilometers above sea level and lasted some hours. Sensitivity analyses and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying, for the sensitivity analysis, the Extended-FAST method in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for blocking some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting the SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to a 95% share of the total variance for S_SMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio
2016-11-01
The study focuses on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models in which a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web and to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high-tier exposure modelling tool that propagates uncertainty to the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty was extensively addressed in the distribution functions describing the input data, and its effect on model results was assessed by applying sensitivity analysis techniques (the Morris screening method, regression analysis, and the variance-based EFAST method). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated that MERLIN-Expo can be successfully employed in integrated, high-tier exposure assessment.
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients follow from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant; however, this does not always hold in its detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty in nuclear data, and of changes in the one-group cross-section input caused by fuel design changes acting through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
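As a rough illustration of the nested aleatory-epistemic sampling strategy described above, the short Python sketch below loops over epistemic parameter draws in an outer loop and aleatory draws in an inner loop; the model function, distributions, and numerical values are hypothetical placeholders rather than the actual challenge-problem setup.

import numpy as np

rng = np.random.default_rng(0)

def model(e, a):
    # Hypothetical response combining one epistemic parameter e and an
    # aleatory variable a (a stand-in for the challenge-problem model).
    return e * np.sin(a) + 0.5 * a

n_epistemic, n_aleatory = 50, 1000
q95 = []
for _ in range(n_epistemic):                      # outer loop: epistemic uncertainty
    e = rng.uniform(0.5, 1.5)                     # interval-valued epistemic parameter
    a = rng.normal(0.0, 1.0, n_aleatory)          # inner loop: aleatory variability
    y = model(e, a)
    q95.append(np.quantile(y, 0.95))              # one inner-loop statistic per epistemic draw

# The spread of the inner statistic across epistemic draws bounds the quantity of interest.
print("95th-percentile response ranges from %.3f to %.3f over the epistemic samples"
      % (min(q95), max(q95)))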
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, T.; Laville, C.; Dyrda, J.
2012-07-01
The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
Assessing climate change and socio-economic uncertainties in long term management of water resources
NASA Astrophysics Data System (ADS)
Jahanshahi, Golnaz; Dawson, Richard; Walsh, Claire; Birkinshaw, Stephen; Glenis, Vassilis
2015-04-01
Long term management of water resources is challenging for decision makers given the range of uncertainties that exist. Such uncertainties are a function of long term drivers of change, such as climate, environmental loadings, demography, land use and other socio-economic drivers. The impact of climate change on the frequency of extreme events such as drought makes it a serious threat to water resources and water security. The release of probabilistic climate information, such as the UKCP09 scenarios, provides improved understanding of some uncertainties in climate models. This has motivated a more rigorous approach to dealing with other uncertainties in order to understand the sensitivity of investment decisions to future uncertainty and to identify adaptation options that are as robust as possible. We have developed and coupled a system of models that includes a weather generator, simulations of catchment hydrology, demand for water and the water resource system. This integrated model has been applied to the Thames catchment, which supplies the city of London, UK. This region is one of the driest in the UK and hence sensitive to water availability. In addition, it is one of the fastest growing parts of the UK and plays an important economic role. Key uncertainties in long term water resources in the Thames catchment, many of which result from earth system processes, are identified and quantified. The implications of these uncertainties are explored using a combination of uncertainty analysis and sensitivity testing. The analysis shows considerable uncertainty in future rainfall, river flow and consequently water resources. For example, results indicate that by the 2050s, low flow (Q95) in the Thames catchment will range from -44 to +9% compared with the control scenario (1970s). Consequently, by the 2050s the average number of drought days is expected to increase 4-6 times relative to the 1970s. Uncertainties associated with urban growth increase these risks further. Adaptation measures, such as new reservoirs, can manage these risks to a certain extent, but our sensitivity testing demonstrates that they are less robust to future uncertainties than measures taken to reduce water demand. Keywords: Climate change, Uncertainty, Decision making, Drought, Risk, Water resources management.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
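A minimal Python sketch of the residual-sampling idea described above follows; the three chained model functions and the residual distributions are invented placeholders standing in for the POA-irradiance, effective-irradiance, and DC-power steps, not the models used in the report.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical empirical residuals (model minus measurement) for two chained steps.
poa_residuals = rng.normal(0.0, 20.0, 500)       # W/m^2, placeholder values
eff_residuals = rng.normal(0.0, 10.0, 500)

def poa_model(ghi):            # placeholder transposition model
    return 0.9 * ghi
def effective_irradiance(poa): # placeholder soiling/reflection model
    return 0.97 * poa
def dc_power(e_eff):           # placeholder module model
    return 0.2 * e_eff

ghi = 800.0                    # a single measured input, W/m^2
n = 10000
# Bootstrap-style propagation: add a randomly drawn residual after each modeling step.
poa = poa_model(ghi) + rng.choice(poa_residuals, n)
e_eff = effective_irradiance(poa) + rng.choice(eff_residuals, n)
p_dc = dc_power(e_eff)

print("DC power: mean %.1f W, 95%% interval [%.1f, %.1f] W"
      % (p_dc.mean(), *np.percentile(p_dc, [2.5, 97.5])))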
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
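The variance-based approach summarized in this abstract can be illustrated with a compact numpy sketch of the Sobol' first-order and total-effect estimators. For brevity it uses plain uniform sampling and the standard Ishigami test function rather than Sobol' sequences, the CICE emulator, or the 39-dimensional parameter space of the study.

import numpy as np

rng = np.random.default_rng(2)

def ishigami(x):
    # Standard test function often used to verify Sobol' estimators.
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

k, n = 3, 2**14                      # number of parameters, base sample size
lo, hi = -np.pi, np.pi
A = rng.uniform(lo, hi, (n, k))      # two independent sample matrices
B = rng.uniform(lo, hi, (n, k))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # "freeze" all columns except column i
    fABi = ishigami(ABi)
    Si = np.mean(fB * (fABi - fA)) / var          # first-order index (Saltelli-type estimator)
    STi = 0.5 * np.mean((fA - fABi)**2) / var     # total-effect index (Jansen estimator)
    print(f"x{i + 1}: S = {Si:.3f}, ST = {STi:.3f}")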
Sensitivity and uncertainty analysis for the annual phosphorus loss estimator model
USDA-ARS's Scientific Manuscript database
Models are often used to predict phosphorus (P) loss from agricultural fields. While it is commonly recognized that there are inherent uncertainties with model predictions, limited studies have addressed model prediction uncertainty. In this study we assess the effect of model input error on predict...
A probabilistic model (SHEDS-Wood) was developed to examine children's exposure and dose to chromated copper arsenate (CCA)-treated wood, as described in Part 1 of this two part paper. This Part 2 paper discusses sensitivity and uncertainty analyses conducted to assess the key m...
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.
Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui
2016-03-05
This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis for a numerical vapor intrusion model for nine input parameters, including soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m³). Model sensitivity and uncertainty for shallow and high-concentration vapor sources (3 m and 400 g/m³) are much smaller than for deep and low-concentration sources (8 m and 1 g/m³). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soil such as sand) are key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate and soil gas permeability are key contributors to model output uncertainty. Another important finding is that the impact of aerobic biodegradation on the vapor intrusion potential of petroleum hydrocarbons is negligible when the vapor source concentration is high, because insufficient oxygen supply limits aerobic biodegradation activity.
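A bare-bones sketch of a one-at-a-time (OAT) perturbation loop of the kind described above is given below; the response function, parameter names, and baseline values are hypothetical placeholders and do not reproduce the Abreu & Johnson model.

import numpy as np

def vapor_intrusion_response(params):
    # Placeholder response standing in for the numerical model output;
    # the real model solves coupled soil-gas transport and building mixing.
    porosity, moisture, permeability, exchange_rate = params
    return (permeability * (porosity - moisture)) / (exchange_rate + 1e-9)

baseline = np.array([0.35, 0.10, 1e-11, 0.5])    # illustrative parameter values
names = ["porosity", "moisture", "permeability", "air exchange rate"]
y0 = vapor_intrusion_response(baseline)

# OAT analysis: perturb each parameter by -10% and +10% with the others fixed.
for i, name in enumerate(names):
    effects = []
    for factor in (0.9, 1.1):
        p = baseline.copy()
        p[i] *= factor
        effects.append((vapor_intrusion_response(p) - y0) / y0)
    print(f"{name:20s}: relative output change for -10%/+10% input = "
          f"{effects[0]:+.2%} / {effects[1]:+.2%}")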
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, Gerhard; Bostelmann, F.
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
Application of the JENDL-4.0 nuclear data set for uncertainty analysis of the prototype FBR Monju
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.
2012-07-01
This paper deals with uncertainty analysis of the Monju reactor using JENDL-4.0 and the ERANOS code [1]. In 2010 the Japan Atomic Energy Agency (JAEA) released the JENDL-4.0 nuclear data set. This new evaluation contains improved values of cross-sections and emphasizes accurate covariance matrices. Also in 2010, JAEA restarted the sodium-cooled fast reactor prototype Monju after about 15 years of shutdown. The long shutdown time resulted in a build-up of 241Am by natural decay from the initially loaded Pu. As well as improved covariance matrices, JENDL-4.0 is announced to contain improved data for minor actinides [2]. The choice of the Monju reactor as an application of the new evaluation therefore seems even more relevant. The uncertainty analysis requires the determination of sensitivity coefficients. The well-established ERANOS code was chosen because of its integrated modules that allow users to perform sensitivity and uncertainty analysis. A JENDL-4.0 cross-section library is not available for ERANOS, so a cross-section library had to be made from the original ENDF files for the ECCO cell code (part of ERANOS). For confirmation of the newly made library, calculations of a benchmark core were performed. These calculations used the MZA and MZB benchmarks and showed results consistent with other libraries. Calculations for the Monju reactor were performed using hexagonal 3D geometry and PN transport theory. However, the ERANOS sensitivity modules cannot use the resulting fluxes, as these modules require finite-difference-based fluxes obtained from RZ SN-transport or 3D diffusion calculations. The corresponding geometrical models have been made and the results verified against Monju restart experimental data [4]. Uncertainty analysis was performed using the RZ model. The JENDL-4.0 uncertainty analysis showed a significant reduction of the uncertainty related to the fission cross-section of Pu, along with an increase of the uncertainty related to the capture cross-section of 238U, compared with the previous JENDL-3.3 version. Covariance data recently added in JENDL-4.0 for 241Am appear to have a non-negligible contribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
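The elementary effect (Morris) screening mentioned above can be sketched in a few lines of Python; the placeholder response function and parameter count are illustrative only, and the trajectory construction is the basic Morris scheme rather than the exact sampling used for HiLAT.

import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Placeholder scalar response in place of a climate-model output.
    return x[0] + 2.0 * x[1]**2 + x[2] * x[3] + 0.1 * x[4]

k, r = 5, 20                    # number of parameters, number of trajectories
p = 4                           # number of grid levels
delta = p / (2.0 * (p - 1))     # standard Morris step size
effects = [[] for _ in range(k)]

for _ in range(r):
    x = rng.choice(np.linspace(0, 1 - delta, int(p / 2)), k)  # random start on the grid
    y = model(x)
    for i in rng.permutation(k):           # move one factor at a time along a trajectory
        step = delta if x[i] + delta <= 1.0 else -delta
        x_new = x.copy()
        x_new[i] += step
        y_new = model(x_new)
        effects[i].append((y_new - y) / step)
        x, y = x_new, y_new

for i in range(k):
    ee = np.array(effects[i])
    # mu* ranks overall importance; sigma flags nonlinearity and interactions.
    print(f"x{i + 1}: mu* = {np.abs(ee).mean():.3f}, sigma = {ee.std():.3f}")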
Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa
2017-06-27
In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on it using social network analysis and global sensitivity and uncertainty analyses. Results for degree (χ2=17.6619, P<0.0001) and betweenness (χ2=21.4186, P<0.0001) centrality suggested that the selection of sampling objects differed between traditional epidemiological methods and newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties of the outbreak and its variation over space and time. We concluded that the newer approaches were significantly more efficient for managing and controlling infectious disease outbreaks, as well as for saving time and public health resources, and could be widely applied to similar local outbreaks.
Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-09-20
These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Bean, J. E.; Economy, K.
2000-05-19
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
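A small Python sketch of the Latin hypercube sampling and rank-transformed regression machinery mentioned above follows; the response function and parameter names are invented placeholders, not the WIPP two-phase flow model.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def latin_hypercube(n, k):
    # One stratified uniform draw per row/column cell, independently permuted per column.
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])
    return u

def repository_pressure(x):
    # Placeholder monotone response standing in for the two-phase flow model output.
    microbial, corrosion, brine_inflow = x.T
    return 2.0 * microbial + corrosion + 0.3 * brine_inflow * corrosion

n, names = 300, ["microbial degradation", "steel corrosion rate", "brine inflow"]
X = latin_hypercube(n, len(names))
y = repository_pressure(X)

# Rank-transform inputs and output, then fit a linear regression on the ranks;
# the standardized coefficients serve as rank regression sensitivity measures.
Xr = np.apply_along_axis(stats.rankdata, 0, X)
yr = stats.rankdata(y)
Xs = (Xr - Xr.mean(0)) / Xr.std(0)
ys = (yr - yr.mean()) / yr.std()
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, c in zip(names, coef):
    print(f"{name:25s}: standardized rank regression coefficient = {c:+.3f}")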
Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker
2015-01-01
Sensitivity analyses (determination of how prediction variables affect response variables) of individual-based models (IBMs) are few but important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
NASA Astrophysics Data System (ADS)
Avital, Matan; Kamai, Ronnie; Davis, Michael; Dor, Ory
2018-02-01
We present a full probabilistic seismic hazard analysis (PSHA) sensitivity analysis for two sites in southern Israel - one in the near field of a major fault system and one farther away. The PSHA analysis is conducted for alternative source representations, using alternative model parameters for the main seismic sources, such as slip rate and Mmax, among others. The analysis also considers the effect of the ground motion prediction equation (GMPE) on the hazard results. In this way, the two types of epistemic uncertainty - modelling uncertainty and parametric uncertainty - are treated and addressed. We quantify the uncertainty propagation by testing its influence on the final calculated hazard, such that the controlling knowledge gaps are identified and can be treated in future studies. We find that current practice in Israel, as represented by the current version of the building code, grossly underestimates the hazard, by approximately 40 % in short return periods (e.g. 10 % in 50 years) and by as much as 150 % in long return periods (e.g. 10E-5). The analysis shows that this underestimation is most probably due to a combination of factors, including source definitions as well as the GMPE used for analysis.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges for fixing insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
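The core of a regionalized sensitivity analysis (classifying Monte Carlo runs as behavioral or non-behavioral and comparing the two groups with a Kolmogorov-Smirnov statistic) can be sketched as below; the forward model, parameter names, and quartile-based classification threshold are illustrative assumptions rather than the reservoir workflow of the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def forward_model(x):
    # Placeholder response standing in for an expensive reservoir simulator output.
    return x[:, 0]**2 + 0.5 * x[:, 1] + 0.05 * x[:, 2]

n, names = 2000, ["permeability multiplier", "aquifer strength", "fault transmissibility"]
X = rng.random((n, len(names)))
y = forward_model(X)

# Regionalized sensitivity analysis: split the Monte Carlo sample into classes
# defined on the response (here, the upper quartile versus the rest) and compare
# the marginal parameter distributions of the two classes with a KS statistic.
behavioral = y > np.quantile(y, 0.75)
for j, name in enumerate(names):
    d, p = stats.ks_2samp(X[behavioral, j], X[~behavioral, j])
    print(f"{name:25s}: KS distance = {d:.3f} (p = {p:.1e})")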
NASA Astrophysics Data System (ADS)
Ren, Luchuan
2015-04-01
A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters. Luchuan Ren, Jianwei Tian, Mingli Hong, Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China. It is obvious that the uncertainties of the maximum tsunami wave heights in offshore areas come partly from uncertainties of the potential seismic tsunami source parameters. A global sensitivity analysis method on the maximum tsunami wave heights to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude MW 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated results of maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the method proposed in this paper. For ranking the importance order of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, the focal depth, the strike angle, dip angle, slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters and the interaction effects among these parameters by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; that the interaction effects between the sensitive parameters are very pronounced at specific sites in the offshore area; and that there exist differences in the importance order of the same group of parameters in generating uncertainties of the maximum tsunami wave heights at different specific sites in the offshore area. These results are helpful for deeply understanding the relationship between the tsunami wave heights and the seismic tsunami source parameters. Keywords: Global sensitivity analysis; Tsunami wave height; Potential seismic tsunami source parameter; Morris method; Extended FAST method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
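For readers who want a quick sense of the kind of analytical solution wrapped as worksheet functions in CXTFIT/Excel, the following Python sketch evaluates a standard solution of the one-dimensional equilibrium convection-dispersion equation for a continuous step input with a first-type inlet condition, and perturbs the dispersion coefficient to approximate a local sensitivity; all numerical values are illustrative and the sketch is not the VBA implementation described above.

import numpy as np
from scipy.special import erfc

def cde_step_input(x, t, v, D, R=1.0, c0=1.0):
    # Analytical solution of the 1-D equilibrium convection-dispersion equation
    # for a continuous step input of concentration c0 at x = 0 (first-type inlet),
    # with pore-water velocity v, dispersion coefficient D and retardation factor R.
    arg1 = (R * x - v * t) / (2.0 * np.sqrt(D * R * t))
    arg2 = (R * x + v * t) / (2.0 * np.sqrt(D * R * t))
    return 0.5 * c0 * (erfc(arg1) + np.exp(v * x / D) * erfc(arg2))

# Forward prediction of a breakthrough curve at x = 30 cm (illustrative values).
t = np.linspace(0.1, 10.0, 50)            # time, h
c = cde_step_input(x=30.0, t=t, v=10.0, D=5.0, R=1.5)

# Crude local sensitivity of the curve to the dispersion coefficient.
dD = 0.05 * 5.0
c_up = cde_step_input(x=30.0, t=t, v=10.0, D=5.0 + dD, R=1.5)
print("max |dC/dD| along the breakthrough curve:", np.max(np.abs(c_up - c)) / dD)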
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamp, F.; Brueningk, S.C.; Wilkens, J.J.
Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20-40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
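A much-simplified sketch of the variance-based idea, applied to the standard linear-quadratic EQD2 expression, is shown below; the RBE and alpha/beta distributions are hypothetical and the binning estimator of the first-order index is a generic illustration, not the SA implementation described in the abstract.

import numpy as np

rng = np.random.default_rng(6)

def eqd2(dose_per_fraction, n_fractions, rbe, alpha_beta):
    # Standard linear-quadratic EQD2 of an RBE-weighted dose
    # (illustrative use only; the clinical models in the abstract are more involved).
    d = rbe * dose_per_fraction
    total = n_fractions * d
    return total * (d + alpha_beta) / (2.0 + alpha_beta)

n = 100_000
rbe = rng.normal(3.0, 0.45, n)                    # hypothetical ion RBE, ~15% relative uncertainty
alpha_beta = rng.lognormal(np.log(2.0), 0.3, n)   # hypothetical tissue alpha/beta in Gy
y = eqd2(dose_per_fraction=3.0, n_fractions=12, rbe=rbe, alpha_beta=alpha_beta)

# First-order variance-based sensitivity S_i = Var(E[Y | X_i]) / Var(Y),
# estimated by binning each input and averaging the output within bins.
def first_order(x, y, bins=50):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return np.sum(counts * (cond_means - y.mean())**2) / (len(y) * y.var())

print("S(RBE)        =", round(first_order(rbe, y), 3))
print("S(alpha/beta) =", round(first_order(alpha_beta, y), 3))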
Vorburger, Robert S; Habeck, Christian G; Narkhede, Atul; Guzman, Vanessa A; Manly, Jennifer J; Brickman, Adam M
2016-01-01
Diffusion tensor imaging suffers from an intrinsically low signal-to-noise ratio. Bootstrap algorithms have been introduced to provide a non-parametric method to estimate the uncertainty of the measured diffusion parameters. To quantify the variability of the principal diffusion direction, bootstrap-derived metrics such as the cone of uncertainty have been proposed. However, bootstrap-derived metrics are not independent of the underlying diffusion profile. A higher mean diffusivity causes a smaller signal-to-noise ratio and, thus, increases the measurement uncertainty. Moreover, the goodness of the tensor model, which relies strongly on the complexity of the underlying diffusion profile, influences bootstrap-derived metrics as well. The presented simulations clearly depict the cone of uncertainty as a function of the underlying diffusion profile. Since the relationship between the cone of uncertainty and common diffusion parameters, such as the mean diffusivity and the fractional anisotropy, is not linear, the cone of uncertainty has a sensitivity different from theirs. In vivo analysis of the fornix reveals the cone of uncertainty to be a predictor of memory function among older adults. No significant correlation occurs with the common diffusion parameters. The present work not only demonstrates the cone of uncertainty as a function of the actual diffusion profile, but also discloses the cone of uncertainty as a sensitive predictor of memory function. Future studies should incorporate bootstrap-derived metrics to provide more comprehensive analysis.
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify the important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification is always based on a deterministic process conceptualization that uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the change in variance when a process is fixed at its different conceptualizations. The variance accounts for both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general and can be applied to a wide range of environmental problems.
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach for reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling - High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
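A minimal sketch of principal component analysis applied to a local sensitivity matrix, in the spirit of the reduction strategy described above, is given below; the sensitivity matrix is randomly generated for illustration rather than computed from a kinetic mechanism.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical normalized local sensitivity matrix S[i, j] = d(ln y_i) / d(ln k_j):
# rows are model targets (e.g., ignition delay at several conditions), columns are
# reaction rate parameters. In practice S would come from the kinetics solver.
n_targets, n_reactions = 40, 120
S = rng.normal(0.0, 0.05, (n_targets, n_reactions))
S[:, [3, 17, 42]] += rng.normal(0.0, 1.0, (n_targets, 3))   # make a few reactions influential

# Principal component analysis of S^T S: eigenvectors with large eigenvalues
# identify groups of reactions that jointly control the targets.
eigval, eigvec = np.linalg.eigh(S.T @ S)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Overall importance: eigenvalue-weighted squared loadings over the leading components.
n_lead = 5
importance = (eigvec[:, :n_lead]**2 * eigval[:n_lead]).sum(axis=1)
print("top reactions by PCA importance:", np.argsort(importance)[::-1][:5])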
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients (such as flux responses or reaction rate ratios) in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-23
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
Global sensitivity analysis in stochastic simulators of uncertain reaction networks
NASA Astrophysics Data System (ADS)
Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.
2016-12-01
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
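To make the variance decomposition concrete, the sketch below illustrates a pick-freeze (Saltelli-style) estimate of a first-order Sobol index for a toy stochastic simulator. The simulator, parameter ranges, and noise term are illustrative assumptions, not the birth-death or Schlögl models used in the paper; the inherent randomness is simply redrawn in both sample sets, so its contribution is lumped with the non-frozen factors.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulator(theta, xi):
    """Toy stochastic simulator: 'theta' are uncertain kinetic-like parameters,
    'xi' stands in for the inherent randomness (here a single noise draw)."""
    k1, k2 = theta
    return k1 * np.sin(k2) + 0.5 * k2 + 0.3 * xi

def first_order_index(n=100_000):
    """Pick-freeze estimate of the first-order Sobol index of parameter k1.

    Two independent parameter samples share the k1 column (the 'frozen' factor);
    the inherent noise xi is redrawn independently, so it is grouped with the
    remaining sources of variability, mimicking the parametric/inherent split."""
    k1 = rng.uniform(0.5, 1.5, n)          # shared (frozen) factor
    k2_a = rng.uniform(0.0, np.pi, n)      # complementary factor, sample A
    k2_b = rng.uniform(0.0, np.pi, n)      # complementary factor, sample B
    xi_a = rng.standard_normal(n)          # inherent randomness, sample A
    xi_b = rng.standard_normal(n)          # inherent randomness, sample B

    y_a = simulator((k1, k2_a), xi_a)
    y_b = simulator((k1, k2_b), xi_b)

    mu = 0.5 * (y_a.mean() + y_b.mean())
    var = 0.5 * (y_a.var() + y_b.var())
    return (np.mean(y_a * y_b) - mu**2) / var   # Cov(y_A, y_B) / Var(y)

if __name__ == "__main__":
    print(f"First-order Sobol index of k1: {first_order_index():.3f}")
```

The same construction extends to groups of factors by freezing the whole group at once, which is the flexibility needed to separate parametric from inherent contributions.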
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
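As a rough illustration of the idea, the snippet below parses "value +/- tolerance" fields from a line of an input deck and draws one Monte Carlo realization per field. The regular expression, the uniform sampling choice, and the field names are assumptions made for this sketch and do not reproduce the actual tool described in the abstract.

```python
import re
import random

# Matches a nominal value followed by a "+/-" tolerance, e.g. "5.25 +/- 0.01".
TOL_PATTERN = re.compile(
    r"(-?\d+\.?\d*(?:[eE][+-]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][+-]?\d+)?)")

def realize_line(line, rng=random):
    """Replace every 'value +/- tolerance' field on an input line with one
    random realization, leaving the rest of the line untouched."""
    def draw(match):
        nominal, tol = float(match.group(1)), float(match.group(2))
        # Uniform within the stated tolerance band; a normal distribution with
        # the tolerance as, say, 2-sigma would be an equally plausible choice.
        return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
    return TOL_PATTERN.sub(draw, line)

if __name__ == "__main__":
    random.seed(1)
    template = "wall_temperature = 5.25 +/- 0.01   emissivity = 0.89 +/- 0.02"
    for case in range(3):                      # three Monte Carlo input decks
        print(f"case {case}: {realize_line(template)}")
```

Because only the tolerance fields are touched, the surrounding file format never needs to be parsed, which is the robustness argument made in the abstract.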
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
NASA Astrophysics Data System (ADS)
Noacco, V.; Wagener, T.; Pianosi, F.; Philp, T.
2017-12-01
Insurance companies provide insurance against a wide range of threats, such as natural catastrophes, nuclear incidents and terrorism. To quantify risk and support investment decisions, mathematical models are used, for example to set the premiums charged to clients that protect from financial loss, should deleterious events occur. While these models are essential tools for adequately assessing the risk attached to an insurer's portfolio, their development is costly and their value for decision-making may be limited by an incomplete understanding of uncertainty and sensitivity. Aside from the business need to understand risk and uncertainty, the insurance sector also faces regulation which requires them to test their models in such a way that uncertainties are appropriately captured and that plans are in place to assess the risks and their mitigation. The building and testing of models constitutes a high cost for insurance companies, and it is a time intensive activity. This study uses an established global sensitivity analysis toolbox (SAFE) to more efficiently capture the uncertainties and sensitivities embedded in models used by a leading re/insurance firm, with structured approaches to validate these models and test the impact of assumptions on the model predictions. It is hoped that this in turn will lead to better-informed and more robust business decisions.
Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D
2017-06-06
Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576-point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
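A minimal sketch of the emulator-based workflow (Latin hypercube design, Gaussian process training, then variance-based SA on the cheap emulator) is shown below, assuming scikit-learn and SciPy are available. The three-input toy model stands in for the CMAQ runs and is unrelated to the actual study.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for a chemistry-transport run; inputs are scaled to [0, 1]."""
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

# 1. Space-filling design (here 64 points in 3 dimensions) and training runs.
design = qmc.LatinHypercube(d=3, seed=0).random(n=64)
y_train = expensive_model(design)

# 2. Train the Gaussian process emulator on the design runs.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2] * 3),
                              normalize_y=True).fit(design, y_train)

# 3. Variance-based SA on the cheap emulator via pick-freeze sampling.
def first_order_indices(predict, n=20_000, d=3):
    a, b = rng.random((n, d)), rng.random((n, d))
    y_a = predict(a)
    var, mu = y_a.var(), y_a.mean()
    indices = []
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]                 # freeze factor i
        y_ab = predict(ab)
        indices.append((np.mean(y_a * y_ab) - mu ** 2) / var)
    return indices

print("Emulator-based first-order indices:",
      [f"{s:.2f}" for s in first_order_indices(lambda x: gp.predict(x))])
```

Repeating step 3 for the emulator of each output hour is what yields the time series of sensitivity coefficients described in the abstract.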
Probabilistic sensitivity analysis for NICE technology assessment: not an optional extra.
Claxton, Karl; Sculpher, Mark; McCabe, Chris; Briggs, Andrew; Akehurst, Ron; Buxton, Martin; Brazier, John; O'Hagan, Tony
2005-04-01
Recently the National Institute for Clinical Excellence (NICE) updated its methods guidance for technology assessment. One aspect of the new guidance is to require the use of probabilistic sensitivity analysis with all cost-effectiveness models submitted to the Institute. The purpose of this paper is to place the NICE guidance on dealing with uncertainty into a broader context of the requirements for decision making; to explain the general approach that was taken in its development; and to address each of the issues which have been raised in the debate about the role of probabilistic sensitivity analysis in general. The most appropriate starting point for developing guidance is to establish what is required for decision making. On the basis of these requirements, the methods and framework of analysis which can best meet these needs can then be identified. It will be argued that the guidance on dealing with uncertainty and, in particular, the requirement for probabilistic sensitivity analysis, is justified by the requirements of the type of decisions that NICE is asked to make. Given this foundation, the main issues and criticisms raised during and after the consultation process are reviewed. Finally, some of the methodological challenges posed by the need fully to characterise decision uncertainty and to inform the research agenda will be identified and discussed. Copyright (c) 2005 John Wiley & Sons, Ltd.
Analysis of the NAEG model of transuranic radionuclide transport and dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kercher, J.R.; Anspaugh, L.R.
We analyze the model for estimating the dose from 239Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The air pathway accounts for 100% of the dose to lung, upper respiratory tract, and thoracic lymph nodes; and 95% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ, peeled vegetables 20%; accidental soil ingestion 5%; ingestion of beef liver 4%; beef muscle 1%. Only a handful of model parameters control the dose for any one organ. The number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance to mean ratios except for the lymph nodes. 16 references, 9 figures, 13 tables.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment
Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...
2016-03-30
Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, they are more sensitive to mean annual streamflow in flat terrain.
Large contribution of natural aerosols to uncertainty in indirect forcing
NASA Astrophysics Data System (ADS)
Carslaw, K. S.; Lee, L. A.; Reddington, C. L.; Pringle, K. J.; Rap, A.; Forster, P. M.; Mann, G. W.; Spracklen, D. V.; Woodhouse, M. T.; Regayre, L. A.; Pierce, J. R.
2013-11-01
The effect of anthropogenic aerosols on cloud droplet concentrations and radiative properties is the source of one of the largest uncertainties in the radiative forcing of climate over the industrial period. This uncertainty affects our ability to estimate how sensitive the climate is to greenhouse gas emissions. Here we perform a sensitivity analysis on a global model to quantify the uncertainty in cloud radiative forcing over the industrial period caused by uncertainties in aerosol emissions and processes. Our results show that 45 per cent of the variance of aerosol forcing since about 1750 arises from uncertainties in natural emissions of volcanic sulphur dioxide, marine dimethylsulphide, biogenic volatile organic carbon, biomass burning and sea spray. Only 34 per cent of the variance is associated with anthropogenic emissions. The results point to the importance of understanding pristine pre-industrial-like environments, with natural aerosols only, and suggest that improved measurements and evaluation of simulated aerosols in polluted present-day conditions will not necessarily result in commensurate reductions in the uncertainty of forcing estimates.
Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty
NASA Astrophysics Data System (ADS)
Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.
2012-12-01
Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...
2018-02-09
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign together with satellite and reanalysis data are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and first-order effects dominate compared with the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratama, Cecep, E-mail: great.pratama@gmail.com; Meilano, Irwan; Nugraha, Andri Dian
Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500 year return period). Hazard curves of PGA have been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the crustal fault has the greatest influence on the hazard estimate. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have then been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to minimize directly the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2011-01-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2σ) uncertainty on the maximum fuel temperature was between ±58°C (3.6%) and ±76°C (4.7%) on a mean value of 1604°C. These values mostly depended on the selection of the distribution types, and not on the number of model calculations above the required Wilks criterion (a (95%,95%) statement would usually require 93 model runs).
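The (95%, 95%) statement mentioned above follows from Wilks' order-statistics formulas. The short sketch below recovers the usual first-order sample sizes (59 runs for a one-sided bound, 93 runs for a two-sided interval); only the coverage and confidence levels are inputs.

```python
def wilks_one_sided(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the 'coverage' quantile
    with at least the requested confidence: 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample minimum and maximum form a two-sided
    tolerance interval: 1 - g**n - n*(1-g)*g**(n-1) >= confidence."""
    n = 2
    while 1.0 - coverage ** n - n * (1.0 - coverage) * coverage ** (n - 1) < confidence:
        n += 1
    return n

if __name__ == "__main__":
    print("one-sided 95/95:", wilks_one_sided())   # 59 runs
    print("two-sided 95/95:", wilks_two_sided())   # 93 runs, as cited in the abstract
```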
Dong, Xin; Zhang, Xinyi; Zeng, Siyu
2017-04-01
In the context of sustainable development, there has been an increasing requirement for an eco-efficiency assessment of wastewater treatment plants (WWTPs). Data envelopment analysis (DEA), a technique that is widely applied for relative efficiency assessment, is used in combination with the tolerances approach to handle WWTPs' multiple inputs and outputs as well as their uncertainty. The economic cost, energy consumption, contaminant removal, and global warming effect during the treatment processes are integrated to interpret the eco-efficiency of WWTPs. A total of 736 sample plants from across China are assessed, and large sensitivities to variations in inputs and outputs are observed for most samples, with only three WWTPs identified as being stably efficient. Size of plant, overcapacity, climate type, and influent characteristics are proven to have a significant influence on both the mean efficiency and performance sensitivity of WWTPs, while no clear relationships were found between eco-efficiency and technology under the framework of uncertainty analysis. The incorporation of uncertainty quantification and environmental impact consideration has improved the reliability and applicability of the assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Substructure Versus Property-Level Dispersed Modes Calculation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Peck, Jeff A.; Bush, T. Jason; Fulcher, Clay W.
2016-01-01
This paper calculates the effect of perturbed finite element mass and stiffness values on the eigenvectors and eigenvalues of the finite element model. The structure is perturbed in two ways: at the "subelement" level and at the material property level. In the subelement eigenvalue uncertainty analysis, the mass and stiffness of each subelement is perturbed by a factor before being assembled into the global matrices. In the property-level eigenvalue uncertainty analysis, all material density and stiffness parameters of the structure are perturbed prior to the eigenvalue analysis. The eigenvalue and eigenvector dispersions of each analysis (subelement and property-level) are also calculated using an analytical sensitivity approximation. Two structural models are used to compare these methods: a cantilevered beam model, and a model of the Space Launch System. For each structural model it is shown how well the analytical sensitivity modes approximate the exact modes when the uncertainties are applied at the subelement level and at the property level.
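Analytical sensitivity approximations of this kind are commonly built from first-order eigenvalue derivatives. The sketch below applies that idea to a small spring-mass stand-in model and compares it with a direct re-analysis of the perturbed system; the three-DOF chain and the uniform stiffness scale factor are illustrative assumptions, not the beam or Space Launch System models from the paper.

```python
import numpy as np
from scipy.linalg import eigh

# Three-DOF spring-mass chain as a stand-in for a finite element model.
k, m = 1.0e4, 2.0
K = k * np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])
M = m * np.eye(3)

lam, phi = eigh(K, M)            # generalized problem; eigenvectors are M-orthonormal

# First-order sensitivity of each eigenvalue to a stiffness scale factor p (K -> p*K):
# d(lambda)/dp = phi^T (dK/dp - lambda * dM/dp) phi, with dK/dp = K and dM/dp = 0.
dlam_dp = np.array([phi[:, j] @ K @ phi[:, j] for j in range(3)])

# Compare with direct re-analysis for a 1% stiffness perturbation.
dp = 0.01
lam_pert, _ = eigh((1.0 + dp) * K, M)
print("analytical :", lam + dlam_dp * dp)
print("re-analysis:", lam_pert)
```

The same derivative, evaluated per subelement or per material property, is what lets the dispersed modes be approximated without re-solving the eigenproblem for every perturbation.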
Wu, Y.; Liu, S.
2012-01-01
Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization, and potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
Climate data induced uncertainty in model based estimations of terrestrial primary productivity
NASA Astrophysics Data System (ADS)
Wu, Z.; Ahlström, A.; Smith, B.; Ardö, J.; Eklundh, L.; Fensholt, R.; Lehsten, V.
2016-12-01
Models used to project global vegetation and carbon cycle differ in their estimates of historical fluxes and pools. These differences arise not only from differences between models but also from differences in the environmental and climatic data that forces the models. Here we investigate the role of uncertainties in historical climate data, encapsulated by a set of six historical climate datasets. We focus on terrestrial gross primary productivity (GPP) and analyze the results from a dynamic process-based vegetation model (LPJ-GUESS) forced by six different climate datasets and two empirical datasets of GPP (derived from flux towers and remote sensing). We find that the climate induced uncertainty, defined as the difference among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 33 Pg C yr-1 globally (19% of mean GPP). The uncertainty is partitioned into the three main climatic drivers, temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (the data range) and the sensitivity of the modeled GPP to the driver (the ecosystem sensitivity). The analysis is performed globally and stratified into five land cover classes. We find that the dynamic vegetation model overestimates GPP, compared to empirically based GPP data over most areas, except for the tropical region. Both the simulations and empirical estimates agree that the tropical region is a disproportionate source of uncertainty in GPP estimation. This is mainly caused by uncertainties in shortwave radiation forcing, of which climate data range contributes slightly higher uncertainty than ecosystem sensitivity to shortwave radiation. We also find that precipitation dominated the climate induced uncertainty over nearly half of terrestrial vegetated surfaces, which is mainly due to large ecosystem sensitivity to precipitation. Overall, climate data ranges are found to contribute more to the climate induced uncertainty than ecosystem sensitivity. Our study highlights the need to better constrain tropical climate and demonstrate that uncertainty caused by climatic forcing data must be considered when comparing and evaluating model results and empirical datasets.
2010-01-01
[Report front-matter fragment; recoverable section titles: Multi-Disciplinary, Multi-Output Sensitivity Analysis (MIMOSA); Introduction to Research Thrust 1; MIMOSA Approach; Collaborative Consistency of MIMOSA; Formulation of MIMOSA.]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.
1977-03-01
The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of 238U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.
Understanding ozone response to its precursor emissions is crucial for effective air quality management practices. This nonlinear response is usually simulated using chemical transport models, and the modeling results are affected by uncertainties in emissions inputs. In this stu...
Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models
2015-03-16
[Report excerpt fragments; recoverable content: a variance-based global sensitivity analysis using Sobol's method was applied to a reduced-order coagulation model to estimate which parameters controlled its performance, with the uncertainty in each sensitivity value also estimated by the Sobol method; cites Sobol, I., "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates."]
Hattis, Dale; Goble, Robert; Chu, Margaret
2005-01-01
In an earlier report we developed a quantitative likelihood-based analysis of the differences in sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth to weaning, and weaning to 60 days) relative to exposures in adult life. Here we draw implications for assessing human risks for full lifetime exposures, taking into account three types of uncertainties in making projections from the rodent data: uncertainty in the central estimates of the life-stage–specific sensitivity factors estimated earlier, uncertainty from chemical-to-chemical differences in life-stage–specific sensitivities for carcinogenesis, and uncertainty in the mapping of rodent life stages to human ages/exposure periods. Among the uncertainties analyzed, the mapping of rodent life stages to human ages/exposure periods is most important quantitatively (a range of several-fold in estimates of the duration of the human equivalent of the highest sensitivity “birth to weaning” period in rodents). The combined effects of these uncertainties are estimated with Monte Carlo analyses. Overall, the estimated population arithmetic mean risk from lifetime exposures at a constant milligrams per kilogram body weight level to a generic mutagenic carcinogen is about 2.8-fold larger than expected from adult-only exposure, with 5–95% confidence limits of 1.5- to 6-fold. The mean estimates for the 0- to 2-year and 2- to 15-year periods are about 35–55% larger than the 10- and 3-fold sensitivity factor adjustments recently proposed by the U.S. Environmental Protection Agency. The present results are based on data for only nine chemicals, including five mutagens. Risk inferences will be altered as data become available for other chemicals. PMID:15811844
The propagation of wind errors through ocean wave hindcasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holthuijsen, L.H.; Booij, N.; Bertotti, L.
1996-08-01
To estimate uncertainties in wave forecasts and hindcasts, computations have been carried out for a location in the Mediterranean Sea using three different analyses of one historic wind field. These computations involve a systematic sensitivity analysis and estimated wind field errors. This technique enables a wave modeler to estimate such uncertainties in other forecasts and hindcasts if only one wind analysis is available.
Enhancing the Characterization of Epistemic Uncertainties in PM2.5 Risk Analyses.
Smith, Anne E; Gans, Will
2015-03-01
The Environmental Benefits Mapping and Analysis Program (BenMAP) is a software tool developed by the U.S. Environmental Protection Agency (EPA) that is widely used inside and outside of EPA to produce quantitative estimates of public health risks from fine particulate matter (PM2.5 ). This article discusses the purpose and appropriate role of a risk analysis tool to support risk management deliberations, and evaluates the functions of BenMAP in this context. It highlights the importance in quantitative risk analyses of characterization of epistemic uncertainty, or outright lack of knowledge, about the true risk relationships being quantified. This article describes and quantitatively illustrates sensitivities of PM2.5 risk estimates to several key forms of epistemic uncertainty that pervade those calculations: the risk coefficient, shape of the risk function, and the relative toxicity of individual PM2.5 constituents. It also summarizes findings from a review of U.S.-based epidemiological evidence regarding the PM2.5 risk coefficient for mortality from long-term exposure. That review shows that the set of risk coefficients embedded in BenMAP substantially understates the range in the literature. We conclude that BenMAP would more usefully fulfill its role as a risk analysis support tool if its functions were extended to better enable and prompt its users to characterize the epistemic uncertainties in their risk calculations. This requires expanded automatic sensitivity analysis functions and more recognition of the full range of uncertainty in risk coefficients. © 2014 Society for Risk Analysis.
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
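The notion of confidence in a recommended policy can be illustrated outside the MDP setting with a toy two-policy decision: sample the uncertain parameters jointly and count how often the base-case policy remains optimal. The payoffs and parameter distributions below are invented for this sketch and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def expected_value(policy, p_response, cost):
    """Toy two-policy decision: treat now versus watchful waiting (illustrative payoffs)."""
    if policy == "treat":
        return p_response * 10.0 - cost          # larger gain if response, minus cost burden
    return p_response * 4.0                      # smaller gain, no treatment cost

n = 10_000
p_response = rng.beta(8, 4, n)                   # joint parameter uncertainty (assumed priors)
cost = rng.normal(2.0, 0.5, n)

# Base-case policy from the point estimates of the parameters.
ev_treat_nominal = expected_value("treat", p_response.mean(), cost.mean())
ev_wait_nominal = expected_value("wait", p_response.mean(), cost.mean())
base_case = "treat" if ev_treat_nominal >= ev_wait_nominal else "wait"

# Probabilistic multivariate SA: how often does the base-case policy stay optimal?
treat_better = expected_value("treat", p_response, cost) >= expected_value("wait", p_response, cost)
confidence = treat_better.mean() if base_case == "treat" else 1.0 - treat_better.mean()
print(f"base-case policy: {base_case}, probability it is optimal: {confidence:.0%}")
```

Sweeping a willingness-to-accept threshold over this probability is, in spirit, what the policy acceptability curve described in the abstract summarizes for the full sequential problem.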
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte-Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte-Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
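The sketch below shows, in miniature, how lognormal 1-sigma uncertainty factors on a few reaction rates can be sampled and propagated to asymmetric high-side and low-side factors on a response. The surrogate response function, the reaction names, and the specific factors are assumptions made for the sketch and are unrelated to the 55-reaction model in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1-sigma uncertainty factors for a few hypothetical key reaction rates: a factor f
# means the rate multiplier is lognormal with sigma of ln(f) around a nominal of 1.
uncertainty_factors = {"ClO+O": 1.3, "Cl+O3": 1.2, "OH+HCl": 1.4, "NO+ClO": 1.25}

def depletion(rates):
    """Toy surrogate for the calculated ozone perturbation (magnitude)."""
    return rates["ClO+O"] * rates["Cl+O3"] / (rates["OH+HCl"] * rates["NO+ClO"])

n = 2000                                     # same order as the 2000 cases in the study
samples = np.empty(n)
for i in range(n):
    rates = {name: np.exp(rng.normal(0.0, np.log(f)))
             for name, f in uncertainty_factors.items()}
    samples[i] = depletion(rates)

median = np.median(samples)
lo, hi = np.percentile(samples, [15.9, 84.1])           # 1-sigma percentiles
print(f"1-sigma factors: {hi / median:.2f} (high side), {median / lo:.2f} (low side)")
```

Because the response is multiplicative in the rates, the output distribution is close to lognormal and the high-side and low-side factors are naturally asymmetric, as in the quoted results.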
Uncertainty and sensitivity assessment of flood risk assessments
NASA Astrophysics Data System (ADS)
de Moel, H.; Aerts, J. C.
2009-12-01
Floods are one of the most frequent and costly natural disasters. In order to protect human lives and valuable assets from the effect of floods, many defensive structures have been built. Despite these efforts, economic losses due to catastrophic flood events have, however, risen substantially during the past couple of decades because of continuing economic developments in flood prone areas. On top of that, climate change is expected to affect the magnitude and frequency of flood events. Because these ongoing trends are expected to continue, a transition can be observed in various countries to move from a protective flood management approach to a more risk based flood management approach. In a risk based approach, flood risk assessments play an important role in supporting decision making. Most flood risk assessments assess flood risks in monetary terms (damage estimated for specific situations or expected annual damage) in order to feed cost-benefit analysis of management measures. Such flood risk assessments contain, however, considerable uncertainties. This results from uncertainties in the many different input parameters propagating through the risk assessment and accumulating in the final estimate. Whilst common in some other disciplines, as with integrated assessment models, full uncertainty and sensitivity analyses of flood risk assessments are not so common. Various studies have addressed uncertainties regarding flood risk assessments, but have mainly focussed on the hydrological conditions. However, uncertainties in other components of the risk assessment, like the relation between water depth and monetary damage, can be substantial as well. This research therefore tries to assess the uncertainties of all components of monetary flood risk assessments, using a Monte Carlo based approach. Furthermore, the total uncertainty will also be attributed to the different input parameters using a variance based sensitivity analysis. Assessing and visualizing the uncertainties of the final risk estimate will be helpful to decision makers to make better informed decisions, and attributing this uncertainty to the input parameters helps to identify which parameters are most important when it comes to uncertainty in the final estimate and should therefore deserve additional attention in further research.
Strain Gauge Balance Uncertainty Analysis at NASA Langley: A Technical Review
NASA Technical Reports Server (NTRS)
Tripp, John S.
1999-01-01
This paper describes a method to determine the uncertainties of measured forces and moments from multi-component force balances used in wind tunnel tests. A multivariate regression technique is first employed to estimate the uncertainties of the six balance sensitivities and 156 interaction coefficients derived from established balance calibration procedures. These uncertainties are then employed to calculate the uncertainties of force-moment values computed from observed balance output readings obtained during tests. Confidence and prediction intervals are obtained for each computed force and moment as functions of the actual measurands. Techniques are discussed for separate estimation of balance bias and precision uncertainties.
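A stripped-down version of the regression step is sketched below for a single balance component: fit reading versus applied load by least squares, form the coefficient covariance from the residual variance, and quote a confidence interval on the fitted sensitivity. The synthetic loads and noise level are assumptions, and the actual method uses a multivariate fit with interaction terms rather than this single-component line.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic calibration: applied normal-force loads and noisy bridge readings.
loads = np.linspace(0.0, 500.0, 25)                     # applied calibration loads
true_sensitivity, true_bias = 2.0e-3, 0.05              # illustrative values
readings = true_bias + true_sensitivity * loads + rng.normal(0.0, 5e-4, loads.size)

# Least-squares fit: reading = b0 + b1 * load; covariance of the coefficients.
X = np.column_stack([np.ones_like(loads), loads])
beta, res, _, _ = np.linalg.lstsq(X, readings, rcond=None)
dof = loads.size - 2
sigma2 = res[0] / dof                                    # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)

t = stats.t.ppf(0.975, dof)
half_width = t * np.sqrt(cov[1, 1])
print(f"sensitivity = {beta[1]:.4e} +/- {half_width:.1e} (95% CI)")
```

Propagating the coefficient covariance through the force-moment equations is what turns these calibration uncertainties into confidence and prediction intervals on the measured loads.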
A Statistics-Based Material Property Analysis to Support TPS Characterization
NASA Technical Reports Server (NTRS)
Copeland, Sean R.; Cozmuta, Ioana; Alonso, Juan J.
2012-01-01
Accurate characterization of entry capsule heat shield material properties is a critical component in modeling and simulating Thermal Protection System (TPS) response in a prescribed aerothermal environment. The thermal decomposition of the TPS material during the pyrolysis and charring processes is poorly characterized and typically results in large uncertainties in material properties as inputs for ablation models. These material property uncertainties contribute to large design margins on flight systems and cloud reconstruction efforts for data collected during flight and ground testing, making revision to existing models for entry systems more challenging. The analysis presented in this work quantifies how material property uncertainties propagate through an ablation model and guides an experimental test regimen aimed at reducing these uncertainties and characterizing the dependencies between properties in the virgin and charred states for a Phenolic Impregnated Carbon Ablator (PICA) based TPS. A sensitivity analysis identifies how the high-fidelity model behaves in the expected flight environment, while a Monte Carlo based uncertainty propagation strategy is used to quantify the expected spread in the in-depth temperature response of the TPS. An examination of how perturbations to the input probability density functions affect output temperature statistics is accomplished using a Kriging response surface of the high-fidelity model. Simulations are based on capsule configuration and aerothermal environments expected during the Mars Science Laboratory (MSL) entry sequence. We identify and rank primary sources of uncertainty from material properties in a flight-relevant environment, show the dependence on spatial orientation and in-depth location on those uncertainty contributors, and quantify how sensitive the expected results are.
Assessment of parametric uncertainty for groundwater reactive transport modeling
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Sensitivity Analysis of Expected Wind Extremes over the Northwestern Sahara and High Atlas Region.
NASA Astrophysics Data System (ADS)
Garcia-Bustamante, E.; González-Rouco, F. J.; Navarro, J.
2017-12-01
A robust statistical framework in the scientific literature allows for the estimation of probabilities of occurrence of severe wind speeds and wind gusts, but does not, however, prevent large uncertainties associated with the particular numerical estimates. An analysis of such uncertainties is thus required. A large portion of this uncertainty arises from the fact that historical observations are inherently shorter than the timescales of interest for the analysis of return periods. Additional uncertainties stem from the different choices of probability distributions and other aspects related to methodological issues or physical processes involved. The present study is focused on historical observations over the Ouarzazate Valley (Morocco) and on a high-resolution regional simulation of the wind in the area of interest. The aim is to provide extreme wind speed and wind gust return values and confidence ranges based on a systematic sampling of the uncertainty space for return periods up to 120 years.
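A common way to produce such return values is to fit a generalized extreme value (GEV) distribution to annual maxima and bootstrap the fit. The sketch below does this for a synthetic 40-year record and a 120-year return period; the data, the GEV choice, and the bootstrap settings are assumptions for illustration and do not reproduce the study's methodology.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(11)

# Synthetic record of annual maximum wind gusts (m/s), standing in for observations.
annual_maxima = genextreme.rvs(c=-0.1, loc=25.0, scale=4.0, size=40, random_state=rng)

def return_level(sample, period=120.0):
    """Return level for the given return period from a GEV fit to the sample."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / period, c, loc=loc, scale=scale)

best = return_level(annual_maxima)

# Nonparametric bootstrap to express the sampling uncertainty of the estimate.
boot = [return_level(rng.choice(annual_maxima, size=annual_maxima.size, replace=True))
        for _ in range(500)]
lo, hi = np.percentile(boot, [5.0, 95.0])
print(f"120-year return level: {best:.1f} m/s (90% range {lo:.1f}-{hi:.1f} m/s)")
```

The width of the bootstrap range illustrates directly why records shorter than the return period of interest dominate the uncertainty budget.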
A framework for sensitivity analysis of decision trees.
Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław
2018-01-01
In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
NASA Astrophysics Data System (ADS)
Yan, Fang; Winijkul, Ekbordin; Bond, Tami C.; Streets, David G.
2014-04-01
Estimates of future emissions are necessary for understanding the future health of the atmosphere, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so, thus it is important to quantify the uncertainty inherent in emission projections. This paper is the second in a series that seeks to establish a more mechanistic understanding of future air pollutant emissions based on changes in technology. The first paper in this series (Yan et al., 2011) described a model that projects emissions based on dynamic changes of vehicle fleet, Speciated Pollutant Emission Wizard-Trend, or SPEW-Trend. In this paper, we explore the underlying uncertainties of global and regional exhaust PM emission projections from on-road vehicles in the coming decades using sensitivity analysis and Monte Carlo simulation. This work examines the emission sensitivities due to uncertainties in retirement rate, timing of emission standards, transition rate of high-emitting vehicles called “superemitters”, and emission factor degradation rate. It is concluded that global emissions are most sensitive to parameters in the retirement rate function. Monte Carlo simulations show that emission uncertainty caused by lack of knowledge about technology composition is comparable to the uncertainty demonstrated by alternative economic scenarios, especially during the period 2010-2030.
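A minimal Monte Carlo sketch of this kind of parametric sensitivity study is given below. The retirement and degradation parameters, their ranges, and the toy emission function are illustrative assumptions, not the SPEW-Trend model; rank correlations stand in for the study's more detailed sensitivity measures.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical uncertain inputs: fleet retirement half-life and emission-factor
# degradation rate (illustrative ranges only).
half_life = rng.uniform(10, 18, n)        # years to retire half the fleet
degradation = rng.uniform(0.01, 0.04, n)  # fractional EF increase per year

def emissions_2030(half_life, degradation, fleet_2010=1.0, ef_2010=1.0):
    years = 20
    surviving = fleet_2010 * 0.5 ** (years / half_life)   # surviving old fleet
    ef = ef_2010 * (1 + degradation) ** years              # degraded emission factor
    return surviving * ef

e = emissions_2030(half_life, degradation)
print("median:", np.median(e), "90% interval:", np.percentile(e, [5, 95]))

# Simple sensitivity measure: rank correlation of each input with the output.
rho_hl, _ = spearmanr(half_life, e)
rho_deg, _ = spearmanr(degradation, e)
print("rank correlation with half_life:", rho_hl)
print("rank correlation with degradation:", rho_deg)
```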
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
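The propagation strategy described, sampling empirical residual distributions and pushing them through the model sequence, can be sketched as below. The residual samples and the toy two-stage chain are placeholders; in practice each stage would be one of the six listed models and its residuals would come from comparison with measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical residual samples for two stages of the model chain
# (e.g., POA irradiance and effective irradiance).
poa_residuals = rng.normal(0.0, 15.0, size=500)      # W/m^2
eff_irr_residuals = rng.normal(0.0, 0.01, size=500)  # dimensionless factor

def pv_chain(ghi, poa_err, eff_err):
    """Toy stand-in for the model sequence: GHI -> POA -> effective irradiance -> AC power."""
    poa = 1.1 * ghi + poa_err
    eff = poa * (1.0 + eff_err)
    dc_power = 0.18 * eff          # crude module efficiency
    return 0.96 * dc_power         # crude inverter efficiency

ghi = 800.0  # W/m^2, a single illustrative condition
samples = [pv_chain(ghi,
                    rng.choice(poa_residuals),
                    rng.choice(eff_irr_residuals))
           for _ in range(10_000)]
print("relative spread (1-sigma):", np.std(samples) / np.mean(samples))
```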
NASA Astrophysics Data System (ADS)
Nepal, S.
2016-12-01
The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium range flows (rising and recession limbs). The efficiency results according to both Nash-Sutcliffe and the coefficient of determination were above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. The generalized likelihood uncertainty estimation (GLUE) results suggest that most of the time the observed runoff is within the parameter uncertainty range, although occasionally the values lie outside it, especially during flood peaks and more so in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This indicates the possibility of applying the process-based J2000 model to ungauged catchments in the Himalayan region, which could provide important insights into the hydrological system dynamics and provide much needed information to support water resources planning and management.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay, space, cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties and respective sensitivities associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions such as the Weibull, exponential, normal, log-normal, etc. The cumulative distribution functions (CDFs) for the response functions considered and the sensitivities associated with the primitive variables for a given response are investigated. These sensitivities help in determining the dominating primitive variables for that response.
Helium Mass Spectrometer Leak Detection: A Method to Quantify Total Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Mather, Janice L.; Taylor, Shawn C.
2015-01-01
In applications where leak rates of components or systems are evaluated against a leak rate requirement, the uncertainty of the measured leak rate must be included in the reported result. However, in the helium mass spectrometer leak detection method, the sensitivity, or resolution, of the instrument is often the only component of the total measurement uncertainty noted when reporting results. To address this shortfall, a measurement uncertainty analysis method was developed that includes the leak detector unit's resolution, repeatability, hysteresis, and drift, along with the uncertainty associated with the calibration standard. In a step-wise process, the method identifies the bias and precision components of the calibration standard, the measurement correction factor (K-factor), and the leak detector unit. Together these individual contributions to error are combined and the total measurement uncertainty is determined using the root-sum-square method. It was found that the precision component contributes more to the total uncertainty than the bias component, but the bias component is not insignificant. For helium mass spectrometer leak rate tests where unit sensitivity alone is not enough, a thorough evaluation of the measurement uncertainty such as the one presented herein should be performed and reported along with the leak rate value.
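The root-sum-square combination at the heart of the method can be written as a short helper; the component values below are illustrative placeholders, not the actual calibration-standard, K-factor, or leak-detector terms from the study.

```python
import math

def total_uncertainty(bias_components, precision_components):
    """Root-sum-square combination of independent relative uncertainty components."""
    return math.sqrt(sum(u**2 for u in bias_components + precision_components))

bias = [0.03, 0.02]             # e.g., calibration-standard and K-factor bias (relative)
precision = [0.05, 0.04, 0.02]  # e.g., repeatability, drift, resolution (relative)
print(f"combined relative uncertainty: {total_uncertainty(bias, precision):.3f}")
```

Reporting this combined value alongside the measured leak rate, rather than the instrument resolution alone, is the practice the abstract advocates.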
CO2 Push-Pull Dual (Conjugate) Faults Injection Simulations
Oldenburg, Curtis (ORCID:0000000201326016); Lee, Kyung Jae; Doughty, Christine; Jung, Yoojin; Borgia, Andrea; Pan, Lehua; Zhang, Rui; Daley, Thomas M.; Altundas, Bilgin; Chugunov, Nikita
2017-07-20
This submission contains datasets and a final manuscript associated with a project simulating carbon dioxide push-pull into a conjugate fault system modeled after Dixie Valley: sensitivity analysis of significant parameters and uncertainty prediction by data-worth analysis. Datasets include: (1) Forward simulation runs of standard cases (push & pull phases), (2) Local sensitivity analyses (push & pull phases), and (3) Data-worth analysis (push & pull phases).
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis by conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
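The quasi-Monte Carlo sampling step can be sketched as below using a scrambled Sobol sequence scaled to uniform prior ranges. The parameter names and bounds are illustrative stand-ins for the PFT-specific CLM4.5 priors; 2^10 = 1024 samples matches the size of the preliminary ensemble.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical prior ranges for four PFT-dependent parameters (bounds illustrative).
names = ["conductance_slope", "sla_top", "leaf_cn", "flnr"]
lower = np.array([4.0, 0.005, 20.0, 0.05])
upper = np.array([12.0, 0.04, 60.0, 0.25])

sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
unit_samples = sampler.random_base2(m=10)        # 2**10 = 1024 quasi-random points
samples = qmc.scale(unit_samples, lower, upper)  # map to the prior ranges

print(samples.shape)  # (1024, 4): one row per ensemble member to run through the model
```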
Uncertainty analysis of hydrological modeling in a tropical area using different algorithms
NASA Astrophysics Data System (ADS)
Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh
2018-01-01
Hydrological modeling outputs are subject to uncertainty resulting from different sources of errors (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative to improve the reliability of modeling results. The uncertainty analysis must also contend with difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in central Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO with P-factor > 0.83, R-factor < 0.56, R2 > 0.91, NSE > 0.89, and 0.18
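The P-factor and R-factor used to compare the algorithms can be computed from a behavioural simulation ensemble as in the sketch below; the synthetic data are placeholders for SWAT output and observed discharge.

```python
import numpy as np

def p_and_r_factor(obs, ensemble, lo=2.5, hi=97.5):
    """P-factor: fraction of observations inside the 95% prediction uncertainty band.
    R-factor: mean band width divided by the std of the observations.
    `ensemble` has shape (n_behavioural_runs, n_timesteps)."""
    lower = np.percentile(ensemble, lo, axis=0)
    upper = np.percentile(ensemble, hi, axis=0)
    p = np.mean((obs >= lower) & (obs <= upper))
    r = np.mean(upper - lower) / np.std(obs)
    return p, r

# Illustrative use with synthetic discharge data
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 5.0, size=365)
ensemble = obs + rng.normal(0, 3.0, size=(200, 365))
print(p_and_r_factor(obs, ensemble))
```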
USDA-ARS's Scientific Manuscript database
Agricultural system models have become important tools in studying water and nitrogen (N) dynamics, as well as crop growth, under different management practices. Complexity in input parameters often leads to significant uncertainty when simulating dynamic processes such as nitrate leaching or crop y...
The National Center for Environmental Assessment (NCEA) has conducted and supported research addressing uncertainties in 2-stage clonal growth models for cancer as applied to formaldehyde. In this report, we summarized publications resulting from this research effort, discussed t...
NASA Astrophysics Data System (ADS)
Li, Y.; Kinzelbach, W.; Zhou, J.; Cheng, G. D.; Li, X.
2012-05-01
The hydrologic model HYDRUS-1-D and the crop growth model WOFOST are coupled to efficiently manage water resources in agriculture and improve the prediction of crop production. The results of the coupled model are validated by experimental studies of irrigated maize conducted in the middle reaches of northwest China's Heihe River, a semi-arid to arid region. Good agreement is achieved between the simulated evapotranspiration, soil moisture and crop production and their respective field measurements made under current maize irrigation and fertilization. Based on the calibrated model, the scenario analysis reveals that the optimal amount of irrigation is 500-600 mm in this region. However, for regions without detailed observation, the results of the numerical simulation can be unreliable for irrigation decision making owing to the shortage of calibrated model boundary conditions and parameters. We therefore develop a method that combines model ensemble simulations with uncertainty/sensitivity analysis to estimate the probability distribution of crop production. In our studies, the uncertainty analysis is used to reveal the risk of a loss of crop production as irrigation decreases. The global sensitivity analysis is used to test the coupled model and further quantitatively analyse the impact of the uncertainty of coupled model parameters and environmental scenarios on crop production. This method can be used for estimation in regions with no or reduced data availability.
NASA Astrophysics Data System (ADS)
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine-scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs. These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
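A minimal sketch of a Sobol' sensitivity analysis of this kind, using the SALib package, is given below. The factor names, bounds, and the cheap stand-in function are illustrative assumptions; in the study each evaluation would be a LISFLOOD-FP run and the output would be an inundation or timing metric. Recent SALib releases also expose the same workflow under SALib.sample.sobol.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["inflow_scale", "manning_n", "dem_resolution"],
    "bounds": [[0.8, 1.2], [0.02, 0.08], [10.0, 100.0]],
}

# Saltelli sampling generates N * (2D + 2) parameter sets for D factors.
X = saltelli.sample(problem, 1024)

def surrogate_inundation(x):
    inflow, n, res = x
    return inflow**2 / n + 0.001 * res   # cheap stand-in for a model output metric

Y = np.apply_along_axis(surrogate_inundation, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))   # first-order indices
print(dict(zip(problem["names"], Si["ST"])))   # total-order indices
```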
Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko
2017-01-01
Material flow analysis can effectively trace and quantify the flows and stocks of materials such as solid wastes in urban environments. However, the integrity of material flow analysis results is compromised by data uncertainties, an occurrence that is particularly acute in low-and-middle-income study contexts. This article investigates the uncertainties in the input data and their effects in a material flow analysis study of municipal solid waste management in Maputo City, the capital of Mozambique. The analysis is based on data collected in 2007 and 2014. Initially, the uncertainties and their ranges were identified by the data classification model of Hedbrant and Sörme, followed by the application of sensitivity analysis. The average lower and upper bounds were 29% and 71%, respectively, in 2007, increasing to 41% and 96%, respectively, in 2014. This indicates higher data quality in 2007 than in 2014. Results also show that not only data are partially missing from the established flows such as waste generation to final disposal, but also that they are limited and inconsistent in emerging flows and processes such as waste generation to material recovery (hence the wider variation in the 2014 parameters). The sensitivity analysis further clarified the most influencing parameter and the degree of influence of each parameter on the waste flows and the interrelations among the parameters. The findings highlight the need for an integrated municipal solid waste management approach to avoid transferring or worsening the negative impacts among the parameters and flows.
NASA Astrophysics Data System (ADS)
Wielicki, B. A.; Cooke, R. M.; Golub, A. A.; Mlynczak, M. G.; Young, D. F.; Baize, R. R.
2016-12-01
Several previous studies have been published on the economic value of narrowing the uncertainty in climate sensitivity (Cooke et al. 2015, Cooke et al. 2016, Hope, 2015). All three of these studies estimated roughly 10 trillion U.S. dollars for the Net Present Value and Real Option Value at a discount rate of 3%. This discount rate is the nominal discount rate used in the U.S. Social Cost of Carbon Memo (2010). The Cooke et al. studies approached this problem by examining advances in accuracy of global temperature measurements, while the Hope (2015) study did not address the type of observations required. While temperature change is related to climate sensitivity, large uncertainties of a factor of 3 in current anthropogenic radiative forcing (IPCC, 2013) would also need to be resolved for advanced decadal temperature change observations to help narrow climate sensitivity. The present study takes a new approach by extending the Cooke et al. (2015, 2016) papers to replace observations of temperature change with observations of decadal change in the effects of changing clouds on the Earth's radiative energy balance, a measurement known as Cloud Radiative Forcing, or Cloud Radiative Effect. Decadal change in this observation is directly related to the largest uncertainty in climate sensitivity, which is cloud feedback from the changing amount of low clouds, primarily low clouds over the world's oceans. As a result, decadal changes in shortwave cloud radiative forcing are more directly related to cloud feedback uncertainty, which is the dominant uncertainty in climate sensitivity. This paper will show results for the new approach, and allow an examination of the sensitivity of economic value results to different observations used as a constraint on uncertainty in climate sensitivity. The analysis suggests roughly a doubling of economic value to 20 trillion U.S. dollars in Net Present Value or Real Option Value at a 3% discount rate. The higher economic value results from two changes: a larger increase in accuracy for SW cloud radiative forcing than for temperature, and lower confounding noise from natural variability in the cloud radiative forcing variable compared to temperature. In particular, global average temperature is much more sensitive to the climate noise of ENSO cycles.
Mumby, Peter J; van Woesik, Robert
2014-05-19
Coral reefs are highly sensitive to the stress associated with greenhouse gas emissions, in particular ocean warming and acidification. While experiments show negative responses of most reef organisms to ocean warming, some autotrophs benefit from ocean acidification. Yet, we are uncertain of the response of coral reefs as systems. We begin by reviewing sources of uncertainty and complexity including the translation of physiological effects into demographic processes, indirect ecological interactions among species, the ability of coral reefs to modify their own chemistry, adaptation and trans-generational plasticity. We then incorporate these uncertainties into two simple qualitative models of a coral reef system under climate change. Some sources of uncertainty are far more problematic than others. Climate change is predicted to have an unambiguous negative effect on corals that is robust to several sources of uncertainty but sensitive to the degree of biogeochemical coupling between benthos and seawater. Macroalgal, zoanthid, and herbivorous fish populations are generally predicted to increase, but the ambiguity (confidence) of such predictions is sensitive to the source of uncertainty. For example, reversing the effect of climate-related stress on macroalgae from being positive to negative had no influence on system behaviour. By contrast, the system was highly sensitive to a change in the stress upon herbivorous fishes. Minor changes in competitive interactions had profound impacts on system behaviour, implying that the outcomes of mesocosm studies could be highly sensitive to the choice of taxa. We use our analysis to identify new hypotheses and suggest that the effects of climatic stress on coral reefs provide an exceptional opportunity to test emerging theories of ecological inheritance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Evaluation of uncertainties in the CRCM-simulated North American climate
NASA Astrophysics Data System (ADS)
de Elía, Ramón; Caya, Daniel; Côté, Hélène; Frigon, Anne; Biner, Sébastien; Giguère, Michel; Paquin, Dominique; Harvey, Richard; Plummer, David
2008-02-01
This work is a first step in the analysis of uncertainty sources in the RCM-simulated climate over North America. Three main sets of sensitivity studies were carried out: the first estimates the magnitude of internal variability, which is needed to evaluate the significance of changes in the simulated climate induced by any model modification. The second is devoted to the role of CRCM configuration as a source of uncertainty, in particular the sensitivity to nesting technique, domain size, and driving reanalysis. The third study aims to assess the relative importance of the previously estimated sensitivities by performing two additional sensitivity experiments: one, in which the reanalysis driving data is replaced by data generated by the second generation Coupled Global Climate Model (CGCM2), and another, in which a different CRCM version is used. Results show that the internal variability, triggered by differences in initial conditions, is much smaller than the sensitivity to any other source. Results also show that levels of uncertainty originating from liberty of choices in the definition of configuration parameters are comparable among themselves and are smaller than those due to the choice of CGCM or CRCM version used. These results suggest that uncertainty originated by the CRCM configuration latitude (freedom of choice among domain sizes, nesting techniques and reanalysis dataset), although important, does not seem to be a major obstacle to climate downscaling. Finally, with the aim of evaluating the combined effect of the different uncertainties, the ensemble spread is estimated for a subset of the analysed simulations. Results show that downscaled surface temperature is in general more uncertain in the northern regions, while precipitation is more uncertain in the central and eastern US.
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
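The emulate-then-rank workflow described can be sketched with scikit-learn as below: a support vector regression emulator is trained on a small set of cases and a random-forest permutation importance then ranks the inputs. Parameter names, data, and model settings are synthetic placeholders, not the CLASS configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
names = ["albedo_refresh_threshold", "limiting_snow_depth", "fresh_snow_density"]

# 400 hypothetical training cases sampled over normalized parameter ranges,
# with a toy response standing in for a CLASS snow output variable.
X = rng.uniform(0.0, 1.0, size=(400, len(names)))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)

emulator = SVR(kernel="rbf", C=10.0).fit(X, y)

# Use the cheap emulator to generate a large ensemble, then rank the inputs.
X_big = rng.uniform(0.0, 1.0, size=(5_000, len(names)))
y_big = emulator.predict(X_big)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_big, y_big)
imp = permutation_importance(rf, X_big, y_big, n_repeats=5, random_state=0)
for name, score in zip(names, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```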
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
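A minimal sketch of a bootstrap-based probabilistic sensitivity analysis is shown below; the two-strategy cost-effectiveness model, the synthetic patient-level data, and the incremental cost-effectiveness ratio (ICER) summary are illustrative assumptions rather than the H. pylori model in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Hypothetical per-patient costs and treatment success for two strategies.
cost_a = rng.gamma(2.0, 400.0, size=n)
cost_b = rng.gamma(2.0, 550.0, size=n)
eff_a = rng.binomial(1, 0.70, size=n)
eff_b = rng.binomial(1, 0.90, size=n)

def icer(ca, cb, ea, eb):
    """Incremental cost per additional treatment success."""
    return (cb.mean() - ca.mean()) / (eb.mean() - ea.mean())

boot = []
for _ in range(5_000):
    ia = rng.integers(0, n, size=n)   # resample strategy A patients with replacement
    ib = rng.integers(0, n, size=n)   # resample strategy B patients with replacement
    boot.append(icer(cost_a[ia], cost_b[ib], eff_a[ia], eff_b[ib]))

print("ICER 95% interval:", np.percentile(boot, [2.5, 97.5]))
```

Resampling the observed data avoids having to assume parametric distributions for the inputs, which is the point the abstract makes about the bootstrap.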
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.
NASA Astrophysics Data System (ADS)
Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin
2016-03-01
Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models due to the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, the combined likelihood (CL) function integrating information concerning water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and the nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment that involved a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The results of the uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were lower and more uniform than those obtained with likelihood functions composed of an individual calibration criterion. This new and successful application of the GLUE method for determining the uncertainty and sensitivity of the RZWQM2 could provide a reference for the optimization of model parameters with different emphases according to research interests.
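The GLUE procedure with Latin hypercube sampling and a combined likelihood can be sketched as below. The two-parameter toy model, the observation targets, the inverse-error likelihood, and the behavioural threshold (best 10%) are all placeholders for RZWQM2 and the water/nitrogen/yield criteria, not the study's actual formulation.

```python
import numpy as np
from scipy.stats import qmc

obs = {"swc": 0.25, "nit": 40.0, "yield": 9.5}   # hypothetical observation targets

def model(theta):
    """Toy stand-in returning the three output types used in the combined likelihood."""
    k_sat, nitrif = theta
    return {"swc": 0.2 + 0.1 * k_sat,
            "nit": 30.0 + 20.0 * nitrif,
            "yield": 8.0 + 2.0 * k_sat * nitrif}

# Latin hypercube sample of the two parameters over illustrative ranges.
sampler = qmc.LatinHypercube(d=2, seed=0)
theta = qmc.scale(sampler.random(2000), [0.1, 0.0], [2.0, 1.0])

def combined_likelihood(sim):
    # Inverse relative-error likelihood, equally weighted across the three criteria.
    return np.prod([1.0 / (1e-6 + abs(sim[k] - obs[k]) / obs[k]) for k in obs])

L = np.array([combined_likelihood(model(t)) for t in theta])
mask = L >= np.percentile(L, 90)                 # keep the best 10% as "behavioural"
weights = L[mask] / L[mask].sum()
print("behavioural parameter means:", theta[mask].T @ weights)
```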
Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...
2014-12-31
Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
Phu, Jack; Kalloniatis, Michael; Khuu, Sieu K.
2018-01-01
Purpose Current clinical perimetric test paradigms present stimuli randomly to various locations across the visual field (VF), inherently introducing spatial uncertainty, which reduces contrast sensitivity. In the present study, we determined the extent to which spatial uncertainty affects contrast sensitivity in glaucoma patients by minimizing spatial uncertainty through attentional cueing. Methods Six patients with open-angle glaucoma and six healthy subjects underwent laboratory-based psychophysical testing to measure contrast sensitivity at preselected locations at two eccentricities (9.5° and 17.5°) with two stimulus sizes (Goldmann sizes III and V) under different cueing conditions: 1, 2, 4, or 8 points verbally cued. Method of Constant Stimuli and a single-interval forced-choice procedure were used to generate frequency of seeing (FOS) curves at locations with and without VF defects. Results At locations with VF defects, cueing minimizes spatial uncertainty and improves sensitivity under all conditions. The effect of cueing was maximal when one point was cued, and rapidly diminished when more points were cued (no change to baseline with 8 points cued). The slope of the FOS curve steepened with reduced spatial uncertainty. Locations with normal sensitivity in glaucomatous eyes had similar performance to that of healthy subjects. There was a systematic increase in uncertainty with the depth of VF loss. Conclusions Sensitivity measurements across the VF are negatively affected by spatial uncertainty, which increases with greater VF loss. Minimizing uncertainty can improve sensitivity at locations of deficit. Translational Relevance Current perimetric techniques introduce spatial uncertainty and may therefore underestimate sensitivity in regions of VF loss. PMID:29600116
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, J.; Tolson, B.
2017-12-01
The increasing complexity and runtime of environmental models lead to a situation in which the calibration of all model parameters or the estimation of all of their uncertainties is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing method is independent of the SA method, we applied it to two widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluation is needed, which enables checking of already processed sensitivity results. This is one step towards reliable and transferable, published sensitivity results.
Attentional Mechanisms in Simple Visual Detection: A Speed-Accuracy Trade-Off Analysis
ERIC Educational Resources Information Center
Liu, Charles C.; Wolfgang, Bradley J.; Smith, Philip L.
2009-01-01
Recent spatial cuing studies have shown that detection sensitivity can be increased by the allocation of attention. This increase has been attributed to one of two mechanisms: signal enhancement or uncertainty reduction. Signal enhancement is an increase in the signal-to-noise ratio at the cued location; uncertainty reduction is a reduction in the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Don; Rearden, Bradley T; Hollenbach, Daniel F
2009-02-01
The Radiochemical Development Facility at Oak Ridge National Laboratory has been storing solid materials containing 233U for decades. Preparations are under way to process these materials into a form that is inherently safe from a nuclear criticality safety perspective. This will be accomplished by down-blending the 233U materials with depleted or natural uranium. At the request of the U.S. Department of Energy, a study has been performed using the SCALE sensitivity and uncertainty analysis tools to demonstrate how these tools could be used to validate nuclear criticality safety calculations of selected process and storage configurations. ISOTEK nuclear criticality safety staff provided four models that are representative of the criticality safety calculations for which validation will be needed. The SCALE TSUNAMI-1D and TSUNAMI-3D sequences were used to generate energy-dependent keff sensitivity profiles for each nuclide and reaction present in the four safety analysis models, also referred to as the applications, and in a large set of critical experiments. The SCALE TSUNAMI-IP module was used together with the sensitivity profiles and the cross-section uncertainty data contained in the SCALE covariance data files to propagate the cross-section uncertainties (Δσ/σ) to keff uncertainties (Δk/k) for each application model. The SCALE TSUNAMI-IP module was also used to evaluate the similarity of each of the 672 critical experiments with each application. Results of the uncertainty analysis and similarity assessment are presented in this report. A total of 142 experiments were judged to be similar to application 1, and 68 experiments were judged to be similar to application 2. None of the 672 experiments were judged to be adequately similar to applications 3 and 4. Discussion of the uncertainty analysis and similarity assessment is provided for each of the four applications. Example upper subcritical limits (USLs) were generated for application 1 based on trending of the energy of average lethargy of neutrons causing fission, trending of the TSUNAMI similarity parameters, and use of data adjustment techniques.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
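The linearized confidence-region idea underlying the analysis can be sketched as below: the parameter covariance is approximated by the residual variance times the inverse of J^T W J, where J is the Jacobian (sensitivity matrix) of the model response with respect to the parameters and W holds the weights. The toy two-parameter model, the weighting choice, and the numbers are illustrative, not the four- or six-element respiratory models in the abstract.

```python
import numpy as np

def model(freq, R, C):
    """Toy impedance-magnitude-like response with resistance R and compliance C."""
    return R + 1.0 / (2 * np.pi * freq * C)

def jacobian(freq, R, C, eps=1e-6):
    """Finite-difference sensitivities of the response to R and C."""
    base = model(freq, R, C)
    dR = (model(freq, R + eps, C) - base) / eps
    dC = (model(freq, R, C + eps) - base) / eps
    return np.column_stack([dR, dC])

freq = np.linspace(0.125, 4.0, 30)           # frequency range, as in the low-band fits
theta = (2.0, 0.05)                          # hypothetical fitted R and C
W = np.diag(1.0 / model(freq, *theta) ** 2)  # weight by predicted magnitude (one choice)

J = jacobian(freq, *theta)
sigma2 = 0.01                                # assumed residual variance
cov = sigma2 * np.linalg.inv(J.T @ W @ J)
print("1-sigma parameter uncertainties:", np.sqrt(np.diag(cov)))
```

Repeating this calculation for different frequency ranges or criterion variables is one way to reproduce the kind of experiment-design comparisons the abstract describes.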
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
Global sensitivity analysis of groundwater transport
NASA Astrophysics Data System (ADS)
Cvetkovic, V.; Soltani, S.; Vigouroux, G.
2015-12-01
In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings on the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
Comparison of dose response functions for EBT3 model GafChromic™ film dosimetry system.
Aldelaijan, Saad; Devic, Slobodan
2018-05-01
Different dose response functions of EBT3 model GafChromic™ film dosimetry system have been compared in terms of sensitivity as well as uncertainty vs. error analysis. We also made an assessment of the necessity of scanning film pieces before and after irradiation. Pieces of EBT3 film model were irradiated to different dose values in Solid Water (SW) phantom. Based on images scanned in both reflection and transmission mode before and after irradiation, twelve different response functions were calculated. For every response function, a reference radiochromic film dosimetry system was established by generating calibration curve and by performing the error vs. uncertainty analysis. Response functions using pixel values from the green channel demonstrated the highest sensitivity in both transmission and reflection mode. All functions were successfully fitted with rational functional form, and provided an overall one-sigma uncertainty of better than 2% for doses above 2 Gy. Use of pre-scanned images to calculate response functions resulted in negligible improvement in dose measurement accuracy. Although reflection scanning mode provides higher sensitivity and could lead to a more widespread use of radiochromic film dosimetry, it has fairly limited dose range and slightly increased uncertainty when compared to transmission scan based response functions. Double-scanning technique, either in transmission or reflection mode, shows negligible improvement in dose accuracy as well as a negligible increase in dose uncertainty. Normalized pixel value of the images scanned in transmission mode shows linear response in a dose range of up to 11 Gy. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.
2004-01-01
This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as "STOP", analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.
Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis
NASA Technical Reports Server (NTRS)
1972-01-01
An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research was carried out on the development of a code for uncertainty analysis based on a statistical approach for assessing uncertainty in input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density and fuel temperature. This calculation is performed during irradiation using Monte Carlo N-Particle Transport. The uncertainty method is based on the probability density function. The code is developed as a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation is done by modeling the geometry of a PWR terrace, with MCNPX at a power of 54 MW with fuel type UO2 pellets. The calculation is done using the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format. Interfaces were developed for obtaining nuclear data in ACE format from ENDF through a dedicated NJOY calculation process for temperature changes in a certain range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Mueller, Donald E.; Patton, Bruce W.
2016-08-31
Experiments are being planned at Research Centre Rež (RC Rež) to use the FLiBe (2 7LiF-BeF2) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform design of FLiBe-based experiments using the salt from MSRE.
USEPA EXAMPLE EXIT LEVEL ANALYSIS RESULTS
Developed by NERL/ERD for the Office of Solid Waste, the enclosed product provides an example uncertainty analysis (UA) and initial process-based sensitivity analysis (SA) of hazardous waste "exit" concentrations for 7 chemicals and metals using the 3MRA Version 1.0 Modeling Syst...
Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model
This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
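As an illustration of the variance-based branch of such a GSA, the sketch below computes Sobol' indices for a toy unit-cost model; the cost equation, parameter bounds, and the use of the SALib package are all assumptions for demonstration, not the models or data of the cited study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 6,
    "names": ["capital", "interest_rate", "feedstock_price",
              "conversion_eff", "glycerol_price", "operating_cost"],
    "bounds": [[80e6, 120e6], [0.03, 0.10], [0.50, 0.90],
               [0.85, 0.98], [0.10, 0.40], [0.05, 0.15]],
}

def unit_cost(x):
    """Toy biodiesel unit-cost model (illustrative, not the paper's TEA model)."""
    capital, rate, feed, eff, gly, opex = x.T
    n_years = 20
    crf = rate * (1 + rate) ** n_years / ((1 + rate) ** n_years - 1)  # capital recovery factor
    annual_output = 50e6                                              # litres per year (assumed)
    return capital * crf / annual_output + feed / eff + opex - 0.1 * gly

X = saltelli.sample(problem, 1024)   # Saltelli sampling scheme for Sobol' indices
Y = unit_cost(X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>16}: S1 = {s1:5.2f}, ST = {st:5.2f}")
```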
A design methodology for nonlinear systems containing parameter uncertainty
NASA Technical Reports Server (NTRS)
Young, G. E.; Auslander, D. M.
1983-01-01
In the present design methodology for nonlinear systems containing parameter uncertainty, a generalized sensitivity analysis is incorporated which employs parameter space sampling and statistical inference. For the case of a system with j adjustable and k nonadjustable parameters, this methodology (which includes an adaptive random search strategy) is used to determine the combination of j adjustable parameter values that maximizes the probability that the performance indices simultaneously satisfy the design criteria, in spite of the uncertainty due to the k nonadjustable parameters.
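A minimal sketch of this idea is shown below, assuming a toy two-parameter design problem and plain (non-adaptive) random search in place of the adaptive strategy described in the abstract; the performance model, criteria, and distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def performance(adjustable, nonadjustable):
    """Toy performance indices for a second-order system (illustrative only)."""
    kp, kd = adjustable          # j = 2 adjustable gains
    m, c = nonadjustable         # k = 2 uncertain plant parameters
    wn = np.sqrt(kp / m)                         # natural frequency
    zeta = (c + kd) / (2.0 * np.sqrt(kp * m))    # damping ratio
    return wn, zeta

def prob_success(adjustable, n_mc=500):
    """Probability that the design criteria hold despite nonadjustable-parameter uncertainty."""
    m = rng.normal(1.0, 0.1, n_mc)   # uncertain mass
    c = rng.normal(0.5, 0.1, n_mc)   # uncertain damping
    wn, zeta = performance(adjustable, (m, c))
    ok = (wn > 2.0) & (zeta > 0.4) & (zeta < 1.0)   # design criteria
    return ok.mean()

# Simple random search over the adjustable parameters, a stand-in for the
# adaptive random search strategy described in the abstract.
best, best_p = None, -1.0
for _ in range(200):
    cand = rng.uniform([1.0, 0.0], [20.0, 5.0])     # bounds on kp, kd
    p = prob_success(cand)
    if p > best_p:
        best, best_p = cand, p
print("best adjustable parameters:", best, "success probability:", best_p)
```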
Uncertainty Propagation in Hypersonic Vehicle Aerothermoelastic Analysis
NASA Astrophysics Data System (ADS)
Lamorte, Nicolas Etienne
Hypersonic vehicles face a challenging flight environment. The aerothermoelastic analysis of their components requires numerous simplifying approximations. Identifying and quantifying the effect of uncertainties pushes the limits of the existing deterministic models, and is pursued in this work. An uncertainty quantification framework is used to propagate the effects of identified uncertainties on the stability margins and performance of the different systems considered. First, the aeroelastic stability of a typical section representative of a control surface on a hypersonic vehicle is examined. Variability in the uncoupled natural frequencies of the system is modeled to mimic the effect of aerodynamic heating. Next, the stability of an aerodynamically heated panel representing a component of the skin of a generic hypersonic vehicle is considered. Uncertainty in the location of transition from laminar to turbulent flow and in the heat flux prediction is quantified using CFD. In both cases significant reductions of the stability margins are observed. A loosely coupled airframe-integrated scramjet engine is considered next. The elongated body and cowl of the engine flow path are subject to harsh aerothermodynamic loading, which causes them to deform. Uncertainty associated with the deformation prediction is propagated to the engine performance analysis. The cowl deformation is the main contributor to the sensitivity of the propulsion system performance. Finally, a framework for aerothermoelastic stability boundary calculation for hypersonic vehicles using CFD is developed. The use of CFD enables one to consider different turbulence conditions, laminar or turbulent, and different models of the air mixture, in particular a real gas model which accounts for dissociation of molecules at high temperature. The system is found to be sensitive to turbulence modeling as well as the location of the transition from laminar to turbulent flow. Real gas effects play a minor role in the flight conditions considered. These studies demonstrate the advantages of accounting for uncertainty at an early stage of the analysis. They emphasize the important relation between heat flux modeling, thermal stresses and stability margins of hypersonic vehicles.
Uncertainty Analysis of Sonic Boom Levels Measured in a Simulator at NASA Langley
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Ely, Jeffry W.
2012-01-01
A sonic boom simulator has been constructed at NASA Langley Research Center for testing the human response to sonic booms heard indoors. Like all measured quantities, sonic boom levels in the simulator are subject to systematic and random errors. To quantify these errors, and their net influence on the measurement result, a formal uncertainty analysis is conducted. Knowledge of the measurement uncertainty, or range of values attributable to the quantity being measured, enables reliable comparisons among measurements at different locations in the simulator as well as comparisons with field data or laboratory data from other simulators. The analysis reported here accounts for acoustic excitation from two sets of loudspeakers: one loudspeaker set at the facility exterior that reproduces the exterior sonic boom waveform and a second set of interior loudspeakers for reproducing indoor rattle sounds. The analysis also addresses the effect of pressure fluctuations generated when exterior doors of the building housing the simulator are opened. An uncertainty budget is assembled to document each uncertainty component, its sensitivity coefficient, and the combined standard uncertainty. The latter quantity will be reported alongside measurement results in future research reports to indicate data reliability.
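For readers unfamiliar with uncertainty budgets, the sketch below combines hypothetical component uncertainties into a combined standard uncertainty by root-sum-square, following the usual GUM-style propagation; the component names and values are illustrative, not the facility's actual budget.

```python
import math

# Illustrative uncertainty budget: each entry is (standard uncertainty in dB, sensitivity coefficient).
# The component names and numbers are assumptions, not the simulator's documented budget.
budget = {
    "exterior loudspeaker calibration":  (0.30, 1.0),
    "interior rattle loudspeakers":      (0.20, 1.0),
    "door-induced pressure fluctuation": (0.15, 1.0),
    "microphone and repeatability":      (0.10, 1.0),
}

# Combined standard uncertainty: root-sum-square of (sensitivity coefficient * component uncertainty).
u_c = math.sqrt(sum((c * u) ** 2 for u, c in budget.values()))
print(f"combined standard uncertainty: {u_c:.2f} dB")
```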
Sabouri, Sarah; Gerber, Markus; Lemola, Sakari; Becker, Stephen P; Shamsi, Mahin; Shakouri, Zeinab; Sadeghi Bahmani, Dena; Kalak, Nadeem; Holsboer-Trachsler, Edith; Brand, Serge
2016-07-01
The Dark Triad (DT) describes a set of three closely related personality traits: Machiavellianism, narcissism, and psychopathy. The aim of this study was to examine the associations between DT traits, sleep disturbances, anxiety sensitivity and intolerance of uncertainty. A total of 341 adults (M = 29 years) completed a series of questionnaires related to the DT traits, sleep disturbances, anxiety sensitivity, and intolerance of uncertainty. A higher DT total score was associated with increased sleep disturbances, and higher scores for anxiety sensitivity and intolerance of uncertainty. In regression analyses, Machiavellianism and psychopathy were predictors of sleep disturbances, anxiety sensitivity, and intolerance of uncertainty. Results indicate that specific DT traits, namely Machiavellianism and psychopathy, are associated with sleep disturbances, anxiety sensitivity and intolerance of uncertainty in young adults. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Murphy, Conor; Bastola, Satish; Sweeney, John
2013-04-01
Climate change impact and adaptation assessments have traditionally adopted a 'top-down' scenario based approach, where information from different Global Climate Models (GCMs) and emission scenarios are employed to develop impacts led adaptation strategies. Due to the tradeoffs in the computational cost and need to include a wide range of GCMs for fuller characterization of uncertainties, scenarios are better used for sensitivity testing and adaptation options appraisal. One common approach to adaptation that has been defined as robust is the use of safety margins. In this work, the sensitivity of the safety margins adopted by the agency responsible for flood risk management in Ireland to the uncertainty in future projections is examined. The sensitivity of fluvial flood risk to climate change is assessed for four Irish catchments using a large number of GCMs (17) forced with three emissions scenarios (SRES A1B, A2, B1) as input to four hydrological models. Both uncertainty within and between hydrological models is assessed using the GLUE framework. Regionalisation is achieved using a change factor method to infer changes in the parameters of a weather generator using monthly output from the GCMs, while flood frequency analysis is conducted using the method of probability weighted moments to fit the Generalised Extreme Value distribution to ~20,000 annual maxima series. The sensitivity of design margins to the uncertainty space considered is visualised using risk response surfaces. The hydrological sensitivity is measured as the percentage change in flood peak for specified recurrence intervals. Results indicate that there is a considerable residual risk associated with allowances of +20% when uncertainties are accounted for, and that the risk of exceedance of design allowances is greatest for more extreme, low-frequency events, with considerable implications for critical infrastructure, e.g., culverts, bridges, flood defences, whose designs are normally associated with such return periods. Sensitivity results show that the impact of climate change is not as great for flood peaks with higher return periods. The average width of the uncertainty range and the size of the range for each catchment reveal that the uncertainties in low-frequency events are greater than those in high-frequency events. In addition, the uncertainty interval, estimated as the average width of the uncertainty range of flow for the five return periods, grows wider with a decrease in the runoff coefficient and wetness index of each catchment, both of which tend to increase the nonlinearity in the rainfall response. A key management question that emerges is the acceptability of residual risk where high exposure of vulnerable populations and/or critical infrastructure coincides with high costs of additional capacity in safety margins.
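A minimal sketch of the return-level calculation underlying such an analysis is shown below; it fits a GEV distribution to synthetic annual maxima with SciPy's maximum-likelihood estimator (the study itself uses probability-weighted moments) and compares return-level changes against a +20% allowance. All numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Synthetic annual-maximum flow series for a baseline and a perturbed (climate-changed) climate.
baseline = genextreme.rvs(c=-0.1, loc=100.0, scale=25.0, size=2000, random_state=rng)
future = 1.15 * baseline    # crude stand-in for a change-factor perturbation

def return_levels(amax, periods):
    """Fit a GEV by maximum likelihood and return the T-year return levels."""
    c, loc, scale = genextreme.fit(amax)
    return genextreme.ppf(1.0 - 1.0 / np.asarray(periods), c, loc=loc, scale=scale)

periods = [10, 50, 100]
q_base = return_levels(baseline, periods)
q_fut = return_levels(future, periods)
for T, qb, qf in zip(periods, q_base, q_fut):
    print(f"T={T:>3} yr: change in flood peak = {100 * (qf - qb) / qb:+.1f}%  (vs. +20% safety margin)")
```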
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundations of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
Oddo, Perry C; Lee, Ben S; Garner, Gregory G; Srikrishnan, Vivek; Reed, Patrick M; Forest, Chris E; Keller, Klaus
2017-09-05
Sea levels are rising in many areas around the world, posing risks to coastal communities and infrastructures. Strategies for managing these flood risks present decision challenges that require a combination of geophysical, economic, and infrastructure models. Previous studies have broken important new ground on the considerable tensions between the costs of upgrading infrastructure and the damages that could result from extreme flood events. However, many risk-based adaptation strategies remain silent on certain potentially important uncertainties, as well as the tradeoffs between competing objectives. Here, we implement and improve on a classic decision-analytical model (Van Dantzig 1956) to: (i) capture tradeoffs across conflicting stakeholder objectives, (ii) demonstrate the consequences of structural uncertainties in the sea-level rise and storm surge models, and (iii) identify the parametric uncertainties that most strongly influence each objective using global sensitivity analysis. We find that the flood adaptation model produces potentially myopic solutions when formulated using traditional mean-centric decision theory. Moving from a single-objective problem formulation to one with multiobjective tradeoffs dramatically expands the decision space, and highlights the need for compromise solutions to address stakeholder preferences. We find deep structural uncertainties that have large effects on the model outcome, with the storm surge parameters accounting for the greatest impacts. Global sensitivity analysis effectively identifies important parameter interactions that local methods overlook, and that could have critical implications for flood adaptation strategies. © 2017 Society for Risk Analysis.
Predicting uncertainty in future marine ice sheet volume using Bayesian statistical methods
NASA Astrophysics Data System (ADS)
Davis, A. D.
2015-12-01
The marine ice instability can trigger rapid retreat of marine ice streams. Recent observations suggest that marine ice systems in West Antarctica have begun retreating. However, unknown ice dynamics, computationally intensive mathematical models, and uncertain parameters in these models make predicting retreat rate and ice volume difficult. In this work, we fuse current observational data with ice stream/shelf models to develop probabilistic predictions of future grounded ice sheet volume. Given observational data (e.g., thickness, surface elevation, and velocity) and a forward model that relates uncertain parameters (e.g., basal friction and basal topography) to these observations, we use a Bayesian framework to define a posterior distribution over the parameters. A stochastic predictive model then propagates uncertainties in these parameters to uncertainty in a particular quantity of interest (QoI), here the volume of grounded ice at a specified future time. While the Bayesian approach can in principle characterize the posterior predictive distribution of the QoI, the computational cost of both the forward and predictive models makes this effort prohibitively expensive. To tackle this challenge, we introduce a new Markov chain Monte Carlo method that constructs convergent approximations of the QoI target density in an online fashion, yielding accurate characterizations of future ice sheet volume at significantly reduced computational cost. Our second goal is to attribute uncertainty in these Bayesian predictions to uncertainties in particular parameters. Doing so can help target data collection, for the purpose of constraining the parameters that contribute most strongly to uncertainty in the future volume of grounded ice. For instance, smaller uncertainties in parameters to which the QoI is highly sensitive may account for more variability in the prediction than larger uncertainties in parameters to which the QoI is less sensitive. We use global sensitivity analysis to help answer this question, and make the computation of sensitivity indices computationally tractable using a combination of polynomial chaos and Monte Carlo techniques.
NASA Astrophysics Data System (ADS)
Watanabe, Kenichi; Minniti, Triestino; Kockelmann, Winfried; Dalgliesh, Robert; Burca, Genoveva; Tremsin, Anton S.
2017-07-01
The uncertainties and the stability of a neutron sensitive MCP/Timepix detector when operating in the event timing mode for quantitative image analysis at a pulsed neutron source were investigated. The dominant component to the uncertainty arises from the counting statistics. The contribution of the overlap correction to the uncertainty was concluded to be negligible from considerations based on the error propagation even if a pixel occupation probability is more than 50%. We, additionally, have taken into account the multiple counting effect in consideration of the counting statistics. Furthermore, the detection efficiency of this detector system changes under relatively high neutron fluxes due to the ageing effects of current Microchannel Plates. Since this efficiency change is position-dependent, it induces a memory image. The memory effect can be significantly reduced with correction procedures using the rate equations describing the permanent gain degradation and the scrubbing effect on the inner surfaces of the MCP pores.
NASA Astrophysics Data System (ADS)
Milne, Alice E.; Glendining, Margaret J.; Bellamy, Pat; Misselbrook, Tom; Gilhespy, Sarah; Rivas Casado, Monica; Hulin, Adele; van Oijen, Marcel; Whitmore, Andrew P.
2014-01-01
The UK's greenhouse gas inventory for agriculture uses a model based on the IPCC Tier 1 and Tier 2 methods to estimate the emissions of methane and nitrous oxide from agriculture. The inventory calculations are disaggregated at country level (England, Wales, Scotland and Northern Ireland). Before now, no detailed assessment of the uncertainties in the estimates of emissions had been done. We used Monte Carlo simulation to do such an analysis. We collated information on the uncertainties of each of the model inputs. The uncertainties propagate through the model and result in uncertainties in the estimated emissions. Using a sensitivity analysis, we found that in England and Scotland the uncertainty in the emission factor for emissions from N inputs (EF1) affected uncertainty the most, but that in Wales and Northern Ireland, the emission factor for N leaching and runoff (EF5) had greater influence. We showed that if the uncertainty in any one of these emission factors is reduced by 50%, the uncertainty in emissions of nitrous oxide reduces by 10%. The uncertainty in the methane emission factors for enteric fermentation in cows and sheep most affected the uncertainty in methane emissions. When inventories are disaggregated (as that for the UK is) correlation between separate instances of each emission factor will affect the uncertainty in emissions. As more countries move towards inventory models with disaggregation, it is important that the IPCC give firm guidance on this topic.
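The sketch below illustrates the kind of Monte Carlo propagation described, using a toy Tier-1-style nitrous oxide calculation; the emission factors, distributions, and activity data are invented for illustration and are not the UK inventory values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy Tier-1-style N2O calculation: N2O-N = EF1 * N_applied + EF5 * N_leached.
n_applied = 1.0e6   # kg N applied (assumed activity data)
frac_leach = 0.3    # fraction of applied N that leaches (assumed)

def simulate(ef1_spread):
    """Propagate emission-factor uncertainty; the spread is the log-scale standard deviation."""
    ef1 = rng.lognormal(np.log(0.01), ef1_spread, n)    # direct emission factor
    ef5 = rng.lognormal(np.log(0.0075), 0.6, n)         # leaching/runoff emission factor
    return ef1 * n_applied + ef5 * frac_leach * n_applied

base = simulate(ef1_spread=0.6)
halved = simulate(ef1_spread=0.3)   # EF1 uncertainty reduced by 50%
for name, e in [("baseline", base), ("EF1 uncertainty halved", halved)]:
    width = np.percentile(e, 97.5) - np.percentile(e, 2.5)
    print(f"{name}: mean = {e.mean():.0f} kg N2O-N, 95% interval width = {width:.0f}")
```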
Quantifying uncertainty and sensitivity in sea ice models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark
The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas; Burns, Joseph R.
The aftermath of the Tōhoku earthquake and the Fukushima accident has led to a global push to improve the safety of existing light water reactors. A key component of this initiative is the development of nuclear fuel and cladding materials with potentially enhanced accident tolerance, also known as accident-tolerant fuels (ATF). These materials are intended to improve core fuel and cladding integrity under beyond design basis accident conditions while maintaining or enhancing reactor performance and safety characteristics during normal operation. To complement research that has already been carried out to characterize ATF neutronics, the present study provides an initial investigation of the sensitivity and uncertainty of ATF system responses to nuclear cross section data. ATF concepts incorporate novel materials, including SiC and FeCrAl cladding and high density uranium silicide composite fuels, in turn introducing new cross section sensitivities and uncertainties which may behave differently from traditional fuel and cladding materials. In this paper, we conducted sensitivity and uncertainty analysis using the TSUNAMI-2D sequence of SCALE with infinite lattice models of ATF assemblies. Of all the ATF materials considered, it is found that radiative capture in 56Fe in FeCrAl cladding is the most significant contributor to eigenvalue uncertainty. 56Fe yields significant potential eigenvalue uncertainty associated with its radiative capture cross section; this is by far the largest ATF-specific uncertainty found in these cases, exceeding even those of uranium. We found that while significant new sensitivities indeed arise, the general sensitivity behavior of ATF assemblies does not markedly differ from traditional UO2/zirconium-based fuel/cladding systems, especially with regard to uncertainties associated with uranium. We assessed the similarity of the IPEN/MB-01 reactor benchmark model to application models with FeCrAl cladding. We used TSUNAMI-IP to calculate similarity indices of the application model and IPEN/MB-01 reactor benchmark model. This benchmark was selected for its use of SS304 as a cladding and structural material, with significant 56Fe content. The similarity indices suggest that while many differences in reactor physics arise from differences in design, sensitivity to and behavior of 56Fe absorption is comparable between systems, thus indicating the potential for this benchmark to reduce uncertainties in 56Fe radiative capture cross sections.
Traas, T P; Luttik, R; Jongbloed, R H
1996-08-01
In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators. The previous method is expanded. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, it was assumed that these NOECs were a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and were assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty of the model can be ascribed to uncertainty in species sensitivity as expressed by NOECs. A very small proportion of model uncertainty is contributed by BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is quite large. To avoid unethical toxicity testing with mammalian or avian predators, the use of this uncertainty in the proposed method for calculating MPC distributions cannot be avoided. The fifth percentile of the MPC is suggested as a safe value for top predators.
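A minimal sketch of the Monte Carlo step is shown below, assuming illustrative log-logistic parameters rather than the study's fitted cadmium distributions; SciPy's fisk distribution is the log-logistic.

```python
import numpy as np
from scipy.stats import fisk   # fisk is SciPy's log-logistic distribution

rng = np.random.default_rng(3)
n = 50_000

# Illustrative log-logistic NOEC and BAF distributions (made-up parameters,
# not the cadmium data used in the study).
noec = fisk.rvs(c=2.0, scale=5.0, size=n, random_state=rng)   # field-corrected NOEC, mg/kg food
baf = fisk.rvs(c=3.0, scale=0.8, size=n, random_state=rng)    # bioaccumulation factor of prey

mpc = noec / baf                 # maximum permissible concentration, MPC = NOEC / BAF
mpc5 = np.percentile(mpc, 5)     # 5th percentile, suggested as a safe value for top predators
print(f"median MPC = {np.median(mpc):.2f}, MPC5 = {mpc5:.2f}")
```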
Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector
NASA Astrophysics Data System (ADS)
Lenel, U. R.; Davies, D. G. S.; Moore, M. A.
An analytical technique (Analysis with Uncertain Qualities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.
JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY
The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.
JUPITER PROJECT - MERGING INVERSE PROBLEM FORMULATION TECHNOLOGIES
The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project seeks to enhance and build on the technology and momentum behind two of the most popular sensitivity analysis, data assessment, calibration, and uncertainty analysis programs used in envi...
Ngonghala, Calistus N; Teboh-Ewungkem, Miranda I; Ngwa, Gideon A
2015-06-01
We derive and study a deterministic compartmental model for malaria transmission with varying human and mosquito populations. Our model considers disease-related deaths, asymptomatic immune humans who are also infectious, as well as mosquito demography, reproduction and feeding habits. Analysis of the model reveals the existence of a backward bifurcation and persistent limit cycles whose period and size are determined by two threshold parameters: the vectorial basic reproduction number Rm, and the disease basic reproduction number R0, whose size can be reduced by reducing Rm. We conclude that malaria dynamics are indeed oscillatory when the mosquito's demography, feeding and reproductive patterns are explicitly incorporated in modeling the mosquito population dynamics. A sensitivity analysis reveals important control parameters that can affect the magnitudes of Rm and R0, threshold quantities to be taken into consideration when designing control strategies. Both Rm and the intrinsic period of oscillation are shown to be highly sensitive to the mosquito's birth constant λm and the mosquito's feeding success probability pw. Control of λm can be achieved by spraying, eliminating breeding sites or moving them away from human habitats, while pw can be controlled via the use of mosquito repellent and insecticide-treated bed-nets. The disease threshold parameter R0 is shown to be highly sensitive to pw, and the intrinsic period of oscillation is also sensitive to the rate at which reproducing mosquitoes return to breeding sites. A global sensitivity and uncertainty analysis reveals that the ability of the mosquito to reproduce and uncertainties in the estimates of the rates at which exposed humans become infectious and infectious humans recover from malaria are critical in generating uncertainties in the disease classes.
Fuel cycle cost uncertainty from nuclear fuel cycle comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, J.; McNelis, D.; Yim, M.S.
2013-07-01
This paper examined the uncertainty in fuel cycle cost (FCC) calculation by considering both model and parameter uncertainty. Four different fuel cycle options were compared in the analysis including the once-through cycle (OT), the DUPIC cycle, the MOX cycle and a closed fuel cycle with fast reactors (FR). The model uncertainty was addressed by using three different FCC modeling approaches with and without the time value of money consideration. The relative ratios of FCC in comparison to OT did not change much by using different modeling approaches. This observation was consistent with the results of the sensitivity study for the discount rate. Two different sets of data with uncertainty ranges of unit costs were used to address the parameter uncertainty of the FCC calculation. The sensitivity study showed that the dominating contributor to the total variance of FCC is the uranium price. In general, the FCC of OT was found to be the lowest, followed by FR, MOX, and DUPIC. But depending on the uranium price, the FR cycle was found to have a lower FCC than OT. The reprocessing cost was also found to have a major impact on FCC.
NASA Astrophysics Data System (ADS)
Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter
2016-10-01
Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of the company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, the quality of the input data, the time period and other factors. The input data are usually not deterministic but are often of a random nature. They are affected by uncertainties of the market environment and many other factors. Taking into account the input data uncertainty, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed, and the impact of uncertain input parameters on the target forecast value is simulated with a case-study model. Statistical analysis and risk analysis of the forecast results are carried out, including sensitivity analysis and variable impact analysis.
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which the model performance of the ON-N and NH₃-N simulations was sensitive. However, the ON-N and NO₃-N simulations were sensitive to the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C, as measured using global sensitivity analysis.
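The GLUE idea integrated into that workflow can be sketched as follows; the toy first-order decay model, synthetic observations, priors, and behavioural threshold are assumptions for illustration, not the R-FME/WSP model of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy first-order decay model standing in for the waste-stabilization-pond simulation.
t_obs = np.array([0.0, 1.0, 2.0, 4.0, 8.0])      # days
obs = np.array([30.0, 22.0, 17.0, 10.0, 4.5])    # synthetic NH3-N observations, mg/L

def model(k, c0):
    """Concentration under first-order removal with rate k and initial value c0."""
    return c0 * np.exp(-k * t_obs)

# GLUE: sample parameter sets from priors, score each with a likelihood measure
# (here the Nash-Sutcliffe efficiency), and keep the "behavioural" sets above a threshold.
n = 20_000
k = rng.uniform(0.05, 1.0, n)
c0 = rng.uniform(20.0, 40.0, n)
sims = model(k[:, None], c0[:, None])            # shape (n, len(t_obs))
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)

behavioural = nse > 0.6
pred = sims[behavioural]
print("behavioural sets:", behavioural.sum(), "of", n)
print("GLUE 5-95% prediction band at each observation time:")
print(np.round(np.percentile(pred, 5, axis=0), 1))
print(np.round(np.percentile(pred, 95, axis=0), 1))
```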
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
Adjoint-Based Uncertainty Quantification with MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, Jeffrey E.
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J
2015-01-01
Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export. Copyright © 2014 Elsevier B.V. All rights reserved.
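Since the InVEST sediment model builds on the USLE, a variance-based sensitivity sketch over the bare USLE factors illustrates the approach; the factor ranges are invented, and the plain pick-and-freeze Sobol' estimators below stand in for the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
names = ["R", "K", "LS", "C", "P"]   # USLE factors: erosivity, erodibility, slope length/steepness, cover, practice
low = np.array([500.0, 0.05, 0.5, 0.001, 0.5])
high = np.array([3000.0, 0.50, 8.0, 0.300, 1.0])

def usle(x):
    """Soil loss A = R * K * LS * C * P (the USLE underlying the InVEST sediment model)."""
    return np.prod(x, axis=-1)

n, k = 20_000, len(names)
A = rng.uniform(low, high, (n, k))
B = rng.uniform(low, high, (n, k))
fA, fB = usle(A), usle(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # replace column i of A with B's column i
    fABi = usle(ABi)
    s1 = np.mean(fB * (fABi - fA)) / var           # first-order Sobol' index (Saltelli 2010 estimator)
    st = 0.5 * np.mean((fA - fABi) ** 2) / var     # total-order index (Jansen estimator)
    print(f"{name:>2}: S1 = {s1:5.2f}, ST = {st:5.2f}")
```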
2015-03-16
We conducted a global sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled model performance; the shaded region around each total sensitivity value is the maximum uncertainty in that value estimated by the Sobol method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance and care should be taken in the interpretation and reporting of results.
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
NASA Astrophysics Data System (ADS)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
NASA Astrophysics Data System (ADS)
Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong
2018-06-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to the soil moisture content status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change depends strongly on the selected model. Model-averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling the nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers face multiple ways of establishing nitrogen models.
Uncertainty in BRCA1 cancer susceptibility testing.
Baty, Bonnie J; Dudley, William N; Musters, Adrian; Kinney, Anita Y
2006-11-15
This study investigated uncertainty in individuals undergoing genetic counseling/testing for breast/ovarian cancer susceptibility. Sixty-three individuals from a single kindred with a known BRCA1 mutation rated uncertainty about 12 items on a five-point Likert scale before and 1 month after genetic counseling/testing. Factor analysis identified a five-item total uncertainty scale that was sensitive to changes before and after testing. The items in the scale were related to uncertainty about obtaining health care, positive changes after testing, and coping well with results. The majority of participants (76%) rated reducing uncertainty as an important reason for genetic testing. The importance of reducing uncertainty was stable across time and unrelated to anxiety or demographics. Yet, at baseline, total uncertainty was low and decreased after genetic counseling/testing (P = 0.004). Analysis of individual items showed that after genetic counseling/testing, there was less uncertainty about the participant detecting cancer early (P = 0.005) and coping well with their result (P < 0.001). Our findings support the importance to clients of genetic counseling/testing as a means of reducing uncertainty. Testing may help clients to reduce the uncertainty about items they can control, and it may be important to differentiate the sources of uncertainty that are more or less controllable. Genetic counselors can help clients by providing anticipatory guidance about the role of uncertainty in genetic testing. (c) 2006 Wiley-Liss, Inc.
Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design
NASA Technical Reports Server (NTRS)
Kuguoglu, Latife; Ludwiczak, Damian
2006-01-01
The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, we evaluated the structural design of the JIMO Heat Rejection Subsystem (HRS). An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. A probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. The stress and displacement contours of the deterministic structural analysis at mean probability were computed and the results presented. This is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement, maximum tensile and compressive stresses of the facesheet in the x and y directions, and maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
Uncertainty in Damage Detection, Dynamic Propagation and Just-in-Time Networks
2015-08-03
Research topics include estimating parameter uncertainty in dynamic data sets; high-order compact finite difference schemes for Helmholtz equations with discontinuous wave numbers across interfaces; delay differential equations with a Gamma-distributed delay, for which histogram plots of the solution were compared at equal population sizes; and numerical sensitivity analysis.
Conant, Andrew; Erickson, Anna; Robel, Martin; ...
2017-02-03
Nuclear forensics has a broad task to characterize recovered nuclear or radiological material and interpret the results of investigation. One approach to isotopic characterization of nuclear material obtained from a reactor is to chemically separate and perform isotopic measurements on the sample and verify the results with modeling of the sample history, for example, operation of a nuclear reactor. The major actinide plutonium and fission product cesium are commonly measured signatures of the fuel history in a reactor core. This study investigates the uncertainty of the plutonium and cesium isotope ratios of a fuel rod discharged from a research pressurized water reactor when the location of the sample is not known a priori. A sensitivity analysis showed overpredicted values for the 240Pu/239Pu ratio toward the axial center of the rod and revealed a lower probability of the rod of interest (ROI) being on the periphery of the assembly. The uncertainty analysis found the relative errors due to only the rod position and boron concentration to be 17% to 36% and 7% to 15% for the 240Pu/239Pu and 137Cs/135Cs ratios, respectively. Lastly, this study provides a method for uncertainty quantification of isotope concentrations due to the location of the ROI. Similar analyses can be performed to verify future chemical and isotopic analyses.
NASA Astrophysics Data System (ADS)
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
Barrett, K. E.; Ellis, K. D.; Glass, C. R.; ...
2015-12-01
The goal of the Accident Tolerant Fuel (ATF) program is to develop the next generation of Light Water Reactor (LWR) fuels with improved performance, reliability, and safety characteristics during normal operations and accident conditions and with reduced waste generation. An irradiation test series has been defined to assess the performance of proposed ATF concepts under normal LWR operating conditions. The Phase I ATF irradiation test series is planned to be performed as a series of drop-in capsule tests to be irradiated in the Advanced Test Reactor (ATR) operated by the Idaho National Laboratory (INL). Design, analysis, and fabrication processes for ATR drop-in capsule experiment preparation are presented in this paper to demonstrate the importance of special design considerations, parameter sensitivity analysis, and precise fabrication and inspection techniques for innovative materials used in ATF experiment assemblies. A Taylor Series Method sensitivity analysis approach was used to identify the most critical variables in cladding and rodlet stress, temperature, and pressure calculations for design analyses. The results showed that internal rodlet pressure calculations are most sensitive to the fission gas release rate uncertainty, while temperature calculations are most sensitive to cladding I.D. and O.D. dimensional uncertainty. The analysis showed that stress calculations are most sensitive to rodlet internal pressure uncertainties; however, the results also indicated that the inside radius, outside radius, and internal pressure were all magnified as they propagate through the stress equation. This study demonstrates the importance for ATF concept development teams to provide the fabricators as much information as possible about the material properties and behavior observed in prototype testing, mock-up fabrication and assembly, and chemical and mechanical testing of the materials that may have been performed in the concept development phase. Special handling, machining, welding, and inspection of materials, if known, should also be communicated to the experiment fabrication and inspection team.
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Should radon be reduced in homes? A cost-effect analysis.
Stigum, Hein; Strand, Terje; Magnus, Per
2003-02-01
Radon is a radioactive gas that may leak into buildings from the ground. Radon exposure is a risk factor for lung cancer. An intervention against radon exposure in homes may consist of locating homes with high radon exposure (above 200 Bq m(-3)) and improving these, and protecting future houses. The purpose of this paper is to calculate the costs and the effects of this intervention. We performed a cost-effect analysis from the perspective of the society, followed by an uncertainty and sensitivity analysis. The distribution of radon levels in Norwegian homes is lognormal with mean = 74.5 Bq m(-3), and 7.6% above 200 Bq m(-3). The preventable attributable fraction of radon on lung cancer was 3.8% (95% uncertainty interval: 0.6%, 8.3%). In cumulative present values the intervention would cost $238 (145, 310) million and save 892 (133, 1981) lives; each life saved costs $0.27 (0.09, 0.9) million. The cost-effect ratio was sensitive to the radon risk, the radon exposure distribution, and the latency period of lung cancer. Together these three parameters explained 90% of the variation in the cost-effect ratio. The uncertainty in the estimated cost per life is large, mainly due to uncertainty in the risk of lung cancer from radon. Based on estimates from road construction, the Norwegian society has been willing to pay $1 million to save a life. This is above the upper uncertainty limit of the cost per life. The intervention against radon in homes, therefore, seems justifiable.
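A minimal sketch of the kind of Monte Carlo propagation used in such a cost-effect analysis: sample uncertain total cost and lives saved, and report the distribution of cost per life saved. The distributions below are illustrative placeholders, not the study's inputs.

```python
# Monte Carlo propagation of uncertainty to a cost-per-life-saved ratio.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
cost  = rng.normal(238.0, 40.0, n)             # intervention cost, million $ (hypothetical)
lives = rng.lognormal(np.log(800.0), 0.5, n)   # lives saved (hypothetical, skewed)
ratio = cost / lives                           # million $ per life saved

lo, med, hi = np.percentile(ratio, [2.5, 50, 97.5])
print(f"cost per life saved: {med:.2f} (95% interval {lo:.2f}-{hi:.2f}) million $")
```

Comparing such an interval against a stated willingness-to-pay threshold is what drives the policy conclusion.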
Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques
NASA Astrophysics Data System (ADS)
Mai, Juliane; Tolson, Bryan
2017-04-01
The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainties is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. When it is, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may itself become computationally expensive in the case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method independency of the convergence testing method, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al. 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we focus on the model independency by testing the frugal method using the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluations are needed, which enables the checking of already processed (and published) sensitivity results. This is one step towards reliable and transferable published sensitivity results.
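For contrast, a minimal sketch of the conventional reliability check that the abstract notes can become expensive: bootstrap resampling of already computed model outputs to put a confidence interval on a sensitivity index. The two-parameter toy function and sample sizes are illustrative assumptions, not the mHM setup.

```python
# Bootstrap confidence interval for a Saltelli-style first-order Sobol' index.
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

n = 4000
A = rng.uniform(-np.pi, np.pi, size=(n, 2))
B = rng.uniform(-np.pi, np.pi, size=(n, 2))
AB0 = A.copy(); AB0[:, 0] = B[:, 0]
fA, fB, fAB0 = model(A), model(B), model(AB0)    # all model runs done once, up front

def s1(idx):
    vy = np.var(np.concatenate([fA[idx], fB[idx]]))
    return np.mean(fB[idx] * (fAB0[idx] - fA[idx])) / vy

boot = np.array([s1(rng.integers(0, n, n)) for _ in range(1000)])  # resample outputs only
print(f"S1(x0) = {s1(np.arange(n)):.3f}, 95% CI = "
      f"[{np.percentile(boot, 2.5):.3f}, {np.percentile(boot, 97.5):.3f}]")
```

A wide interval signals that the sensitivity estimate has not converged for the chosen sampling budget.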
NASA Astrophysics Data System (ADS)
Wang, Xu; Bi, Fengrong; Du, Haiping
2018-05-01
This paper aims to develop a 5-degree-of-freedom driver and seating system model for optimal vibration control. A new method for identification of the driver seating system parameters from experimental vibration measurements has been developed. A parameter sensitivity analysis has been conducted considering the random excitation frequency and system parameter uncertainty. The most and least sensitive system parameters for the transmissibility ratio have been identified. Optimised PID controllers have been developed to reduce the driver's body vibration.
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2014-09-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties in the end results. We estimate uncertainties in economic data, multi-pollutant emission statistics and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, the mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. The economic data have a relatively small impact on uncertainty at the global and national level, while much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions, since the largest uncertainties are due to the metric and emissions, which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties due to changes in pollutant composition. We find global sectoral consumption uncertainties in the range of ±9-±27% using the global temperature potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9-±25%, with metric and emissions uncertainties contributing similarly. The absolute global temperature potential with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
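A minimal sketch of the kind of Monte Carlo propagation used here: the temperature impact is the product of an uncertain emission estimate and an uncertain metric, so relative uncertainties combine multiplicatively. The 7% and 20% relative uncertainties are illustrative assumptions, not the study's values.

```python
# Monte Carlo combination of emission and metric uncertainty into an impact uncertainty.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
emissions = rng.normal(1.0, 0.07, n)   # normalised CO2 emissions, ~7% relative uncertainty
metric    = rng.normal(1.0, 0.20, n)   # normalised GTP(50) metric, ~20% relative uncertainty
impact    = emissions * metric         # normalised temperature impact

half_width = 0.5 * (np.percentile(impact, 97.5) - np.percentile(impact, 2.5))
print(f"relative uncertainty of temperature impact: +/-{100 * half_width:.0f}%")
```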
NASA Astrophysics Data System (ADS)
Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.
2012-04-01
This paper introduces an application of the novel EventTracker platform for instantaneous Sensitivity Analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high-quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real time and have the technological flexibility to adjust their services to work with specific sources of data/information. However, to assure this adaptation in real time, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on its state is of value, as events could cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event tracking sensitivity analysis method describes the system variables and states as a collection of events. The higher the occurrence of an input variable during the triggering of an event, the greater its potential impact on the final analysis of the system state. Experiments were designed to compare the proposed event tracking sensitivity analysis with an existing entropy-based sensitivity analysis method. The results show a 10% improvement in computational efficiency with no compromise in accuracy. They also show that the computational time to perform the sensitivity analysis is 0.5% of that required by the entropy-based method. The proposed method has been applied to real-world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs, with the final aim of recovering the content of such reservoirs, both in onshore and offshore regions. Drilling a well is always guided by technical, economic and security constraints to protect crew, equipment and environment from injury, damage and pollution. Although risk assessment and local practice provide a high degree of security, uncertainty remains in the behaviour of the formation, which may cause critical situations at the rig. To overcome such uncertainties, real-time sensor measurements form a basis for predicting and thus preventing such crises; the proposed method supports the identification of the data necessary for this.
Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph
2011-12-01
The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurements performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. As a result of the computational biokinetic modeling, the mean, standard uncertainty, and confidence interval of the model predictions calculated based on the model parameter uncertainty were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameters strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent, depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty of the model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.
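A minimal sketch of how parameter uncertainty can be turned into a prediction band for a biokinetic quantity such as plasma retention. The single-compartment exponential clearance model and the lognormal rate-constant distribution are illustrative assumptions, far simpler than the multi-compartment ICRP and HMGU zirconium models.

```python
# Monte Carlo propagation of a rate-constant distribution to a plasma retention band.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 101)                              # days after injection
k = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=2000)    # clearance rate, 1/day (hypothetical)

retention = np.exp(-np.outer(k, t))                          # fraction retained in plasma
lo, med, hi = np.percentile(retention, [2.5, 50, 97.5], axis=0)
print(f"day 5 retention: {med[50]:.3f} (95% band {lo[50]:.3f}-{hi[50]:.3f})")
```

The width of the band at each time point is the model-prediction uncertainty that such a study compares against measured plasma data.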
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
B-value and slip rate sensitivity analysis for PGA value in Lembang fault and Cimandiri fault area
NASA Astrophysics Data System (ADS)
Pratama, Cecep; Ito, Takeo; Meilano, Irwan; Nugraha, Andri Dian
2017-07-01
We examine the contribution of slip rate and b-value to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, i.e. a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi and Bandung using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity. The uncertainty and coefficient of variation from slip rate and b-value in the Lembang and Cimandiri Fault areas have been calculated. We observe that seismic hazard estimates are sensitive to fault slip rate and b-value, with uncertainties of 0.25 g and 0.1-0.2 g, respectively. For specific sites, we found seismic hazard estimates of 0.49 ± 0.13 g with a COV of 27% and 0.39 ± 0.05 g with a COV of 13% for Sukabumi and Bandung, respectively.
NASA Technical Reports Server (NTRS)
Butler, C. F.
1979-01-01
A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a two-dimensional model. The Fuego volcanic event of 1974 was used. Aerosol dispersion processes that were included are: transport, sedimentation, gas-phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete within several months after the eruption and cannot explain the differences between measured and modeled results.
Sensitivity of water resources in the Delaware River basin to climate variability and change
Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.
1993-01-01
Because of the "greenhouse effect," projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climatic change, and presents the results of sensitivity-analysis studies of the potential effects of climate change on water resources in the Delaware River basin. On the basis of sensitivity analyses, potentially serious shortfalls of certain water resources in the basin could result if some climatic-change scenarios become true. The results of basin streamflow-model simulations in this study demonstrate the difficulty in distinguishing effects of climatic change on streamflow and water supply from effects of natural variability in current climate. The future direction of basin changes in most water resources, furthermore, cannot be determined precisely because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans on how to evaluate and respond to changes, should they occur.
NASA Astrophysics Data System (ADS)
Almeida, Susana; Holcombe, Liz; Pianosi, Francesca; Wagener, Thorsten
2016-04-01
Landslides have many negative economic and societal impacts, including the potential for significant loss of life and damage to infrastructure. Slope stability assessment can be used to guide decisions about the management of landslide risk, but its usefulness can be challenged by high levels of uncertainty in predicting landslide occurrence. Prediction uncertainty may be associated with the choice of model that is used to assess slope stability, the quality of the available input data, or a lack of knowledge of how future climatic and socio-economic changes may affect future landslide risk. While some of these uncertainties can be characterised by relatively well-defined probability distributions, for other uncertainties, such as those linked to climate change, no probability distribution is available to characterise them. This latter type of uncertainty, often referred to as deep uncertainty, means that robust policies need to be developed that are expected to perform acceptably well over a wide range of future conditions. In our study the impact of deep uncertainty on slope stability predictions is assessed in a quantitative and structured manner using Global Sensitivity Analysis (GSA) and the Combined Hydrology and Stability Model (CHASM). In particular, we use several GSA methods including the Method of Morris, Regional Sensitivity Analysis and Classification and Regression Trees (CART), as well as advanced visualization tools, to assess the combination of conditions that may lead to slope failure. Our example application is a slope in the Caribbean, an area that is naturally susceptible to landslides due to a combination of high rainfall rates during the hurricane season, steep slopes, and highly weathered residual soils. Rapid unplanned urbanisation and changing climate may further exacerbate landslide risk in the future. Our example shows how we can gain useful information in the presence of deep uncertainty by combining physically based models with GSA in a scenario discovery framework.
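A minimal sketch of the Method of Morris (elementary effects) screening mentioned above, in a simplified radial form. The two-parameter stability function is a toy stand-in for a slope stability model such as CHASM; parameter names, ranges, and coefficients are illustrative assumptions.

```python
# Simplified one-step elementary effects: mu* ranks importance, sigma flags interactions.
import numpy as np

rng = np.random.default_rng(5)

def factor_of_safety(cohesion, water_table):        # toy stability response
    return 1.0 + 0.02 * cohesion - 0.8 * water_table

names = ["cohesion", "water_table"]
lows  = np.array([0.0, 0.0])
highs = np.array([30.0, 1.0])
delta = 0.25                                        # step size in normalised [0, 1] space
r = 50                                              # number of base points

effects = {name: [] for name in names}
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=2)       # normalised base point
    base = factor_of_safety(*(lows + x * (highs - lows)))
    for i, name in enumerate(names):
        xp = x.copy(); xp[i] += delta               # perturb one factor at a time
        pert = factor_of_safety(*(lows + xp * (highs - lows)))
        effects[name].append((pert - base) / delta)

for name in names:
    ee = np.array(effects[name])
    print(f"{name}: mu* = {np.mean(np.abs(ee)):.3f}, sigma = {np.std(ee):.3f}")
```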
Stochastic techno-economic analysis of alcohol-to-jet fuel production.
Yao, Guolin; Staples, Mark D; Malina, Robert; Tyner, Wallace E
2017-01-01
Alcohol-to-jet (ATJ) is one of the technically feasible biofuel technologies. It produces jet fuel from sugary, starchy, and lignocellulosic biomass, such as sugarcane, corn grain, and switchgrass, via fermentation of sugars to ethanol or other alcohols. This study assesses the ATJ biofuel production pathway for these three biomass feedstocks, and advances existing techno-economic analyses of biofuels in three ways. First, we incorporate technical uncertainty for all by-products and co-products through statistical linkages between conversion efficiencies and input and output levels. Second, future price uncertainty is based on case-by-case time-series estimation, and a local sensitivity analysis is conducted with respect to each uncertain variable. Third, breakeven price distributions are developed to communicate the inherent uncertainty in the breakeven price. This research also considers uncertainties in utility input requirements, fuel and by-product outputs, as well as price uncertainties for all major inputs, products, and co-products. All analyses are done from the perspective of a private firm. The stochastic dominance results of net present value (NPV) and breakeven price distributions show that sugarcane is the lowest-cost feedstock over the entire range of uncertainty with the least risk, followed by corn grain and switchgrass, with mean breakeven jet fuel prices of $0.96/L ($3.65/gal), $1.01/L ($3.84/gal), and $1.38/L ($5.21/gal), respectively. The variation of revenues from by-products in the corn grain pathway can significantly impact its profitability. Sensitivity analyses show that technical uncertainty significantly impacts breakeven price and NPV distributions. Technical uncertainty is critical in determining the economic performance of the ATJ fuel pathway and needs to be considered in future economic analyses. The variation of revenues from by-products plays a significant role in profitability. With the distribution of breakeven prices, potential investors can apply whatever risk preferences they like to determine an appropriate bid or breakeven price that matches their risk profile.
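A minimal sketch of a stochastic breakeven-price calculation in the spirit of this analysis: sample uncertain feedstock cost and conversion yield, and solve for the fuel price at which annual revenue covers annualised capital plus operating costs. All economics (plant size, capital cost, yield, discount rate) are illustrative assumptions, not the study's values.

```python
# Monte Carlo breakeven price distribution for a hypothetical biofuel plant.
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
years, discount = 20, 0.10
annuity = (1 - (1 + discount) ** -years) / discount    # present value of $1 per year

feed_cost = rng.normal(60.0, 10.0, n)      # $/t feedstock (hypothetical)
yield_l   = rng.normal(300.0, 30.0, n)     # litres of jet fuel per t feedstock (hypothetical)
capex     = 4.0e8                          # $ up-front capital
feed_rate = 5.0e5                          # t feedstock per year
opex      = 2.0e7                          # $ per year, non-feedstock operating cost

# breakeven condition: annual revenue = annualised capex + annual costs
annual_cost = capex / annuity + opex + feed_cost * feed_rate
breakeven = annual_cost / (yield_l * feed_rate)        # $/L of jet fuel

lo, med, hi = np.percentile(breakeven, [5, 50, 95])
print(f"breakeven jet fuel price: {med:.2f} $/L (90% interval {lo:.2f}-{hi:.2f})")
```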
NASA Astrophysics Data System (ADS)
Devendran, A. A.; Lakshmanan, G.
2014-11-01
Data quality for GIS processing and analysis is becoming an increasing concern due to the accelerated application of GIS technology for problem solving and decision making roles. Uncertainty in the geographic representation of the real world arises as these representations are incomplete. Identification of the sources of these uncertainties and the ways in which they operate in GIS-based representations becomes crucial in any spatial data representation and geospatial analysis. This paper reviews articles on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application: urban simulation and hydrological modelling. Urban growth is a complicated process involving the spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one of the simulation models; it randomly selects potential cells for urbanisation, and the transition rules evaluate the properties of the cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model to evaluate various impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties using the SWAT model are dealt with primarily by the Generalized Likelihood Uncertainty Estimation (GLUE) method.
Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer
NASA Astrophysics Data System (ADS)
Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain
2015-09-01
Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and further constrains the calibration through the use of both surface and subsurface observed data. Sensitivity and uncertainty analyses were performed on the predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to the calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved in the calibration was computed by the Sobol method, which is based on the analysis of the variance decomposition, and the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
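A minimal sketch contrasting the two sampling schemes compared in the simulated tests: simple Monte Carlo draws versus Latin hypercube sampling, which stratifies each input so every marginal range is covered. The dimension and parameter ranges are illustrative assumptions, not the camera-LiDAR calibration parameters.

```python
# Simple Monte Carlo vs. Latin hypercube sampling (LHS) of a unit hypercube.
import numpy as np

rng = np.random.default_rng(7)
n, d = 100, 6                      # samples, calibration parameters (illustrative)

# simple Monte Carlo: independent uniform draws
mc = rng.uniform(0.0, 1.0, size=(n, d))

# Latin hypercube: exactly one draw per stratum of each parameter, strata shuffled per column
lhs = np.empty((n, d))
for j in range(d):
    strata = (np.arange(n) + rng.uniform(size=n)) / n
    lhs[:, j] = rng.permutation(strata)

print("MC column means: ", mc.mean(axis=0).round(3))
print("LHS column means:", lhs.mean(axis=0).round(3))   # closer to 0.5 by construction
```

The stratification is why LHS typically needs fewer samples than plain Monte Carlo for comparable coverage of the parameter space.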
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized towards more accurately determining the values of these most influential parameters, through observational studies or by improving existing parameterizations in the sea ice model.
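A minimal sketch of the emulator idea described above: fit a surrogate to a limited set of expensive model runs, then use cheap emulator predictions to compute variance-based sensitivity indices. The two-parameter toy response and the Gaussian process settings are illustrative assumptions, not the CICE configuration.

```python
# Gaussian process emulator trained on a few "model runs", then Sobol' indices from the emulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)

def expensive_model(x):                       # toy stand-in for the sea ice model
    return np.sin(2 * x[:, 0]) + 0.5 * x[:, 1]

X_train = rng.uniform(0.0, 1.0, size=(40, 2))   # 40 "expensive" model runs
y_train = expensive_model(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Saltelli-style first-order indices computed on the emulator, not the model
n = 20_000
A = rng.uniform(0.0, 1.0, size=(n, 2))
B = rng.uniform(0.0, 1.0, size=(n, 2))
fA, fB = gp.predict(A), gp.predict(B)
var_y = np.var(np.concatenate([fA, fB]))
for i in range(2):
    ABi = A.copy(); ABi[:, i] = B[:, i]
    S1 = np.mean(fB * (gp.predict(ABi) - fA)) / var_y
    print(f"parameter {i}: first-order Sobol index ~ {S1:.2f}")
```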
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. The total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, Stefan A.
2010-11-01
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based and second-order; local and global) are available. iTOUGH2 also performs Latin hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware Req.: Multi-platform; Related/auxiliary software: PVM (if running in parallel).
Decision analysis of shoreline protection under climate change uncertainty
NASA Astrophysics Data System (ADS)
Chao, Philip T.; Hobbs, Benjamin F.
1997-04-01
If global warming occurs, it could significantly affect water resource distribution and availability. Yet it is unclear whether the prospect of such change is relevant to water resources management decisions being made today. We model a shoreline protection decision problem with a stochastic dynamic program (SDP) to determine whether consideration of the possibility of climate change would alter the decision. Three questions are addressed with the SDP: (1) How important is climate change compared to other uncertainties?, (2) What is the economic loss if climate change uncertainty is ignored?, and (3) How does belief in climate change affect the timing of the decision? In the case study, sensitivity analysis shows that uncertainty in real discount rates has a stronger effect upon the decision than belief in climate change. Nevertheless, a strong belief in climate change makes the shoreline protection project less attractive and often alters the decision to build it.
Robinson, Mike J F; Anselme, Patrick; Suchomel, Kristen; Berridge, Kent C
2015-08-01
Amphetamine and stress can sensitize mesolimbic dopamine-related systems. In Pavlovian autoshaping, repeated exposure to uncertainty of reward prediction can enhance motivated sign-tracking or attraction to a discrete reward-predicting cue (lever-conditioned stimulus; CS+), as well as produce cross-sensitization to amphetamine. However, it remains unknown how amphetamine sensitization or repeated restraint stress interact with uncertainty in controlling CS+ incentive salience attribution reflected in sign-tracking. Here rats were tested in 3 successive phases. First, different groups underwent either induction of amphetamine sensitization or repeated restraint stress, or else were not sensitized or stressed as control groups (either saline injections only, or no stress or injection at all). All next received Pavlovian autoshaping training under either certainty conditions (100% CS-UCS association) or uncertainty conditions (50% CS-UCS association and uncertain reward magnitude). During training, rats were assessed for sign-tracking to the CS+ lever versus goal-tracking to the sucrose dish. Finally, all groups were tested for psychomotor sensitization of locomotion revealed by an amphetamine challenge. Our results confirm that reward uncertainty enhanced sign-tracking attraction toward the predictive CS+ lever, at the expense of goal-tracking. We also reported that amphetamine sensitization promoted sign-tracking even in rats trained under CS-UCS certainty conditions, raising them to sign-tracking levels equivalent to the uncertainty group. Combining amphetamine sensitization and uncertainty conditions did not add together to elevate sign-tracking further above the relatively high levels induced by either manipulation alone. In contrast, repeated restraint stress enhanced subsequent amphetamine-elicited locomotion, but did not enhance CS+ attraction. (c) 2015 APA, all rights reserved.
Robinson, Mike J.F.; Anselme, Patrick; Suchomel, Kristen; Berridge, Kent C.
2015-01-01
Amphetamine and stress can sensitize mesolimbic dopamine-related systems. In Pavlovian autoshaping, repeated exposure to uncertainty of reward prediction can enhance motivated sign-tracking or attraction to a discrete reward-predicting cue (lever CS+), as well as produce cross-sensitization to amphetamine. However, it remains unknown how amphetamine-sensitization or repeated restraint stress interact with uncertainty in controlling CS+ incentive salience attribution reflected in sign-tracking. Here rats were tested in three successive phases. First, different groups underwent either induction of amphetamine sensitization or repeated restraint stress, or else were not sensitized or stressed as control groups (either saline injections only, or no stress or injection at all). All next received Pavlovian autoshaping training under either certainty conditions (100% CS-UCS association) or uncertainty conditions (50% CS-UCS association and uncertain reward magnitude). During training, rats were assessed for sign-tracking to the lever CS+ versus goal-tracking to the sucrose dish. Finally, all groups were tested for psychomotor sensitization of locomotion revealed by an amphetamine challenge. Our results confirm that reward uncertainty enhanced sign-tracking attraction toward the predictive CS+ lever, at the expense of goal-tracking. We also report that amphetamine sensitization promoted sign-tracking even in rats trained under CS-UCS certainty conditions, raising them to sign-tracking levels equivalent to the uncertainty group. Combining amphetamine sensitization and uncertainty conditions together did not add together to elevate sign-tracking further above the relatively high levels induced by either manipulation alone. In contrast, repeated restraint stress enhanced subsequent amphetamine-elicited locomotion, but did not enhance CS+ attraction. PMID:26076340
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter sensitive to the model performance of the ON-N and NH3-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to the ON-N and NO3-N simulations, as measured by global sensitivity analysis. PMID:28704958
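A minimal sketch of the GLUE step described above: sample candidate parameter sets, score each simulation against observations with a likelihood measure (here Nash-Sutcliffe efficiency), keep the "behavioural" sets above a threshold, and weight them. The one-parameter decay model and synthetic observations are illustrative assumptions, not the WSP nitrogen model.

```python
# Generalized Likelihood Uncertainty Estimation (GLUE) with an NSE-based likelihood.
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0, 20)
obs = 10.0 * np.exp(-0.3 * t) + rng.normal(0.0, 0.3, t.size)   # synthetic "NH3-N" observations

k_samples = rng.uniform(0.05, 0.8, 5000)                        # candidate decay-rate parameters
sims = 10.0 * np.exp(-np.outer(k_samples, t))                   # one simulation per candidate

nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
behavioural = nse > 0.5                                          # GLUE acceptance threshold (assumed)
weights = nse[behavioural] / nse[behavioural].sum()

print(f"{behavioural.sum()} behavioural sets out of {k_samples.size}")
print(f"weighted mean k = {np.sum(weights * k_samples[behavioural]):.3f}")
```

The weighted behavioural ensemble is then used to form prediction bounds for the simulated concentrations.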
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.
NASA Technical Reports Server (NTRS)
Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon;
2014-01-01
Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia and examined whether different modeling approaches to major physiological processes contribute to the uncertainties of prediction against field-measured yields and to the uncertainties of sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether the use of an ensemble of crop models can reduce the uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced the experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or five models calibrated in detail, resulted in an uncertainty equivalent to that of the measured yield in well-controlled agronomic field experiments. Sensitivity analysis indicates the necessity to improve the accuracy in predicting both biomass and harvest index in response to increasing [CO2] and temperature.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points) in the face of environmental and anthropogenic change (Perz, Muñoz-Carpena, Kiker and Holt, 2013), and, through Monte Carlo mapping of potential management activities onto the most important factors or processes, to steer the system towards behavioral (desirable) outcomes (Chu-Agor, Muñoz-Carpena et al., 2012).
Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints
Thompson, John R; Spata, Enti; Abrams, Keith R
2015-01-01
We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing–remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions. PMID:26271918
Uncertainty in the Bayesian meta-analysis of normally distributed surrogate endpoints.
Bujkiewicz, Sylwia; Thompson, John R; Spata, Enti; Abrams, Keith R
2017-10-01
We investigate the effect of the choice of parameterisation of meta-analytic models and related uncertainty on the validation of surrogate endpoints. Different meta-analytical approaches take into account different levels of uncertainty which may impact on the accuracy of the predictions of treatment effect on the target outcome from the treatment effect on a surrogate endpoint obtained from these models. A range of Bayesian as well as frequentist meta-analytical methods are implemented using illustrative examples in relapsing-remitting multiple sclerosis, where the treatment effect on disability worsening is the primary outcome of interest in healthcare evaluation, while the effect on relapse rate is considered as a potential surrogate to the effect on disability progression, and in gastric cancer, where the disease-free survival has been shown to be a good surrogate endpoint to the overall survival. Sensitivity analysis was carried out to assess the impact of distributional assumptions on the predictions. Also, sensitivity to modelling assumptions and performance of the models were investigated by simulation. Although different methods can predict mean true outcome almost equally well, inclusion of uncertainty around all relevant parameters of the model may lead to less certain and hence more conservative predictions. When investigating endpoints as candidate surrogate outcomes, a careful choice of the meta-analytical approach has to be made. Models underestimating the uncertainty of available evidence may lead to overoptimistic predictions which can then have an effect on decisions made based on such predictions.
DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterbentz, James; Bayless, Paul; Strydom, Gerhard
2016-11-01
Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas Cooled Reactors (HTGRs), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of propagated uncertainties from the lattice to the coupled neutronics/thermal hydraulics core calculations. Nominal results for the prismatic single block (Ex. I-2a) and super cell models (Ex. I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross-section generation. In this work, the TRITON/NEWT flux-weighted cross sections obtained for Ex. I-2a and various models of Ex. I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.
TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don; Bowman, Stephen M
2009-01-01
This primer presents examples of the application of the SCALE/TSUNAMI tools to generate k-eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D, and to examine uncertainties in the computed k-eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and the need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system, and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria, are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1977-01-01
A six-degree-of-freedom simulation analysis was conducted to examine the effects of the lateral-directional static aerodynamic stability and control uncertainties on the performance of the automatic (no manual inputs) entry-guidance and control systems of the space shuttle orbiter. To establish the acceptable boundaries of the uncertainties, the static aerodynamic characteristics were varied either by applying a multiplier to the aerodynamic parameter or by adding an increment. Control-system modifications were identified that decrease the sensitivity to off-nominal aerodynamics. With these modifications, the acceptable aerodynamic boundaries were determined.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods, such as first order perturbation theory or Monte Carlo sampling, Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high-order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid- and basis-adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15-20), alleviating a well-known limitation of the traditional approach. The prospect of larger-scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
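A minimal sketch of Non-Intrusive Spectral Projection for a single standard-normal input: the model response is projected onto probabilists' Hermite polynomials with Gauss-Hermite quadrature, and the mean and variance are read off the PC coefficients. The exponential "model" is an illustrative assumption; the adaptive sparse-grid and basis-selection machinery of FANISP is not reproduced here.

```python
# 1-D NISP: PC coefficients c_k = E[f(X) He_k(X)] / k! for X ~ N(0, 1).
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def model(x):                                  # toy response of one uncertain input
    return np.exp(0.4 * x)

order = 6
nodes, weights = He.hermegauss(20)             # Gauss quadrature for weight exp(-x^2/2)
norm = sqrt(2.0 * pi)                          # sum(weights) / norm = 1 for a standard normal

coeffs = np.zeros(order + 1)
for k in range(order + 1):
    basis_k = He.hermeval(nodes, np.eye(order + 1)[k])    # He_k evaluated at the nodes
    coeffs[k] = np.sum(weights * model(nodes) * basis_k) / (norm * factorial(k))

mean_pce = coeffs[0]
var_pce = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(f"PCE mean     = {mean_pce:.4f} (exact {np.exp(0.08):.4f})")
print(f"PCE variance = {var_pce:.4f} (exact {np.exp(0.16) * (np.exp(0.16) - 1):.4f})")
```

Because the coefficients come only from evaluations of the model at quadrature nodes, the same procedure works for any black-box code, which is the appeal of the non-intrusive approach.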
Reservoir Performance Under Future Climate For Basins With Different Hydrologic Sensitivities
NASA Astrophysics Data System (ADS)
Mateus, M. C.; Tullos, D. D.
2013-12-01
In addition to long-standing uncertainties related to variable inflows and market price of power, reservoir operators face a number of new uncertainties related to hydrologic nonstationarity, changing environmental regulations, and rapidly growing water and energy demands. This study investigates the impact, sensitivity, and uncertainty of changing hydrology on hydrosystem performance across different hydrogeologic settings. We evaluate the performance of reservoirs in the Santiam River basin, including a case study in the North Santiam Basin, with high permeability and extensive groundwater storage, and the South Santiam Basin, with low permeability, little groundwater storage and rapid runoff response. The modeling objective is to address the following study questions: (1) for the two hydrologic regimes, how does the flood management, water supply, and environmental performance of current reservoir operations change under future 2.5, 50 and 97.5 percentile streamflow projections; and (2) how much change in inflow is required to initiate a failure to meet downstream minimum or maximum flows in the future. We couple global climate model results with a rainfall-runoff model and a formal Bayesian uncertainty analysis to simulate future inflow hydrographs as inputs to a reservoir operations model. To evaluate reservoir performance under a changing climate, we calculate reservoir refill reliability, changes in flood frequency, and reservoir time and volumetric reliability of meeting minimum spring and summer flow targets. Reservoir performance under future hydrology appears to vary with hydrogeology. We find higher sensitivity to floods for the North Santiam Basin and higher sensitivity to minimum flow targets for the South Santiam Basin. Higher uncertainty is associated with basins that have more complex hydrogeology. Results from model simulations contribute to an understanding of the reliability and vulnerability of reservoirs to a changing climate.
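As a rough illustration of the reliability metrics mentioned above, the following sketch computes the time and volumetric reliability of meeting a minimum flow target; the flow values and target are hypothetical and are not taken from the Santiam River study.

```python
import numpy as np

def reliability_metrics(release, target):
    """Time reliability: fraction of time steps meeting the minimum flow target.
    Volumetric reliability: fraction of the total target volume actually delivered."""
    release = np.asarray(release, dtype=float)
    time_rel = (release >= target).mean()
    delivered = np.minimum(release, target).sum()
    vol_rel = delivered / (target * release.size)
    return time_rel, vol_rel

# hypothetical daily summer releases (m3/s) against a 20 m3/s minimum flow target
flows = np.array([25.0, 22.0, 18.5, 19.0, 30.0, 21.0])
print(reliability_metrics(flows, 20.0))
```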
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
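A minimal sketch of the one-at-a-time approach is shown below; the surrogate water-footprint function and the nominal values are purely illustrative stand-ins, not the grid-based daily water balance model used in the study.

```python
import numpy as np

def oat_sensitivity(model, nominal, frac=0.1):
    """One-at-a-time sensitivity: perturb each input by +/- frac of its nominal
    value and report the relative change in the model output."""
    base = model(nominal)
    out = {}
    for name, value in nominal.items():
        changes = []
        for sign in (-1.0, +1.0):
            perturbed = dict(nominal)
            perturbed[name] = value * (1.0 + sign * frac)
            changes.append((model(perturbed) - base) / base)
        out[name] = changes
    return out

# toy "crop water footprint" surrogate (m3/t); variable names mirror the abstract
# but the functional form and numbers are purely illustrative
def wf(p):
    cwu = p["Kc"] * p["ET0"] - 0.5 * p["PR"]     # crop water use proxy
    yield_ = p["Ym"] * (1.0 - 0.2 * p["Ky"])     # yield proxy
    return max(cwu, 1e-9) / yield_

nominal = {"PR": 450.0, "ET0": 1000.0, "Kc": 1.05, "Ky": 1.25, "Ym": 6.0}
print(oat_sensitivity(wf, nominal))
```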
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach for quantitatively determining how the uncertainty of an evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parameter space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the sensitive group is retained while the insensitive group is eliminated for a given scientific study. However, these approaches ignore the loss of the interaction effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. As a result, traditional SA approaches and tools may identify the wrong set of sensitive parameters. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters remain. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM identifies more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over using those selected by Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it is necessary to explore alternative SA methods that emphasize parameter interactions.
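The iterative-removal idea can be sketched as follows, assuming a Jansen-type estimator for total-order Sobol' indices and a toy model in place of CLM-CASA; this illustrates the general backward-elimination concept, not the authors' DGSAM implementation.

```python
import numpy as np

def total_order_indices(model, lows, highs, active, fixed_vals, n=2048, seed=None):
    """Jansen estimator of total-order Sobol' indices for the currently active
    parameters; inactive parameters are held at fixed nominal values."""
    rng = np.random.default_rng(seed)
    d = len(lows)

    def sample(nrows):
        x = lows + rng.random((nrows, d)) * (highs - lows)
        x[:, ~active] = fixed_vals[~active]
        return x

    A, B = sample(n), sample(n)
    fA = model(A)
    var = fA.var()
    st = np.zeros(d)
    for i in np.flatnonzero(active):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        st[i] = 0.5 * np.mean((fA - model(ABi)) ** 2) / var
    return st

def backward_elimination(model, lows, highs, nominal):
    """Iteratively fix the parameter with the smallest total-order index
    until only two parameters remain active."""
    active = np.ones(len(lows), dtype=bool)
    while active.sum() > 2:
        st = total_order_indices(model, lows, highs, active, nominal)
        worst = int(np.argmin(np.where(active, st, np.inf)))
        active[worst] = False
        print(f"fixing x{worst} (ST = {st[worst]:.3f})")
    return active

# toy model with an interaction term; purely illustrative
f = lambda X: X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + 0.05 * X[:, 3]
lows, highs = np.zeros(4), np.ones(4)
print(backward_elimination(f, lows, highs, nominal=np.full(4, 0.5)))
```

Re-estimating the indices after each removal is what distinguishes this scheme from a single static ranking: interaction effects involving an eliminated parameter are no longer silently discarded.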
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results.
Uncertainty in temperature response of current consumption-based emissions estimates
NASA Astrophysics Data System (ADS)
Karstensen, J.; Peters, G. P.; Andrew, R. M.
2015-05-01
Several studies have connected emissions of greenhouse gases to economic and trade data to quantify the causal chain from consumption to emissions and climate change. These studies usually combine data and models originating from different sources, making it difficult to estimate uncertainties along the entire causal chain. We estimate uncertainties in economic data, multi-pollutant emission statistics, and metric parameters, and use Monte Carlo analysis to quantify contributions to uncertainty and to determine how uncertainty propagates to estimates of global temperature change from regional and sectoral territorial- and consumption-based emissions for the year 2007. We find that the uncertainties are sensitive to the emission allocations, mix of pollutants included, the metric and its time horizon, and the level of aggregation of the results. Uncertainties in the final results are largely dominated by the climate sensitivity and the parameters associated with the warming effects of CO2. Based on our assumptions, which exclude correlations in the economic data, the uncertainty in the economic data appears to have a relatively small impact on uncertainty at the national level in comparison to emissions and metric uncertainty. Much higher uncertainties are found at the sectoral level. Our results suggest that consumption-based national emissions are not significantly more uncertain than the corresponding production-based emissions since the largest uncertainties are due to metric and emissions which affect both perspectives equally. The two perspectives exhibit different sectoral uncertainties, due to changes of pollutant compositions. We find global sectoral consumption uncertainties in the range of ±10 to ±27 % using the Global Temperature Potential with a 50-year time horizon, with metric uncertainties dominating. National-level uncertainties are similar in both perspectives due to the dominance of CO2 over other pollutants. The consumption emissions of the top 10 emitting regions have a broad uncertainty range of ±9 to ±25 %, with metric and emission uncertainties contributing similarly. The absolute global temperature potential (AGTP) with a 50-year time horizon has much higher uncertainties, with considerable uncertainty overlap for regions and sectors, indicating that the ranking of countries is uncertain.
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak
2007-01-01
The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
NASA Astrophysics Data System (ADS)
Albano, R.; Sole, A.; Mancusi, L.; Cantisani, A.; Perrone, A.
2017-12-01
The considerable increase in flood damages over the past decades has shifted attention in Europe from protection against floods to managing flood risks. In this context, the assessment of expected damages represents crucial information within the overall flood risk management process. The present paper proposes an open source software, called FloodRisk, that can operatively support stakeholders in decision-making processes with a what-if approach by rapidly assessing flood consequences in terms of direct economic damage and loss of human lives. The evaluation of damage scenarios, through the use of the GIS software proposed here, is essential for cost-benefit or multi-criteria analysis of risk mitigation alternatives. However, considering that the quantitative assessment of flood damage scenarios is characterized by intrinsic uncertainty, a scheme has been developed to identify and quantify the role of the input parameters in the total uncertainty of a flood loss model applied in urban areas with mild terrain and complex topography. Using the concept of parallel models, the contributions of the different modules and input parameters to the total uncertainty are quantified. The results of the present case study exhibit high epistemic uncertainty in the damage estimation module and, in particular, in the type and form of the damage functions used, which were adapted and transferred from different geographic and socio-economic contexts because no depth-damage functions have been developed specifically for Italy. Considering that uncertainty and sensitivity depend considerably on local characteristics, the epistemic uncertainty associated with the risk estimate can be reduced by introducing additional information into the risk analysis. In light of these results, the need to produce and disseminate (open) data for developing micro-scale vulnerability curves is evident. Moreover, there is an urgent need to push forward research into methods and models for assimilating uncertainties into decision-making processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Da Cruz, D. F.; Rochman, D.; Koning, A. J.
2012-07-01
This paper discusses the uncertainty analysis on reactivity and inventory for a typical PWR fuel element as a result of uncertainties in ²³⁵,²³⁸U nuclear data. A typical Westinghouse 3-loop fuel assembly fuelled with UO₂ fuel with 4.8% enrichment has been selected. The Total Monte-Carlo method has been applied using the deterministic transport code DRAGON. This code allows the generation of the few-groups nuclear data libraries by directly using data contained in the nuclear data evaluation files. The nuclear data used in this study is from the JEFF3.1 evaluation, and the nuclear data files for ²³⁸U and ²³⁵U (randomized for the generation of the various DRAGON libraries) are taken from the nuclear data library TENDL. The total uncertainty (obtained by randomizing all ²³⁸U and ²³⁵U nuclear data in the ENDF files) on the reactor parameters has been split into different components (different nuclear reaction channels). Results show that the TMC method in combination with a deterministic transport code constitutes a powerful tool for performing uncertainty and sensitivity analysis of reactor physics parameters. (authors)
Uncertainty for Part Density Determination: An Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdez, Mario Orlando
2016-12-14
Accurate and precise density measurements by hydrostatic weighing require the use of an analytical balance, configured with a suspension system, to measure the weight of a part both in water and in air. Additionally, the densities of these media (water and air) must be precisely known for the part density determination. To validate the accuracy and precision of these measurements, uncertainty statements are required. The work in this report is a revision of an original report written more than a decade ago, specifically applying principles and guidelines suggested by the Guide to the Expression of Uncertainty in Measurement (GUM) for determining the part density uncertainty through sensitivity analysis. In this work, updated derivations are provided; the original example is revised with the updated derivations, and an appendix is devoted to uncertainty evaluations using Monte Carlo techniques, specifically the NIST Uncertainty Machine, as a viable alternative method.
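As an illustration of Monte Carlo uncertainty propagation for hydrostatic weighing, the sketch below applies a textbook buoyancy-corrected density relation; the measurement values and standard uncertainties are invented, and the relation shown may differ from the exact formulation derived in the report.

```python
import numpy as np

def part_density(w_air, w_water, rho_water, rho_air):
    """Buoyancy-corrected hydrostatic-weighing relation:
    rho_part = (W_air*rho_water - W_water*rho_air) / (W_air - W_water)."""
    return (w_air * rho_water - w_water * rho_air) / (w_air - w_water)

rng = np.random.default_rng(0)
n = 200_000
# illustrative measurements with standard uncertainties (grams, g/cm^3)
w_air   = rng.normal(50.000, 0.001, n)     # apparent weight in air
w_water = rng.normal(44.400, 0.002, n)     # apparent weight in water
rho_w   = rng.normal(0.99705, 0.00005, n)  # water density near 25 C
rho_a   = rng.normal(0.00118, 0.00002, n)  # air density

rho = part_density(w_air, w_water, rho_w, rho_a)
print(f"density = {rho.mean():.4f} g/cm^3, standard uncertainty = {rho.std(ddof=1):.4f}")
```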
Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc
2014-09-15
Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport-deposition-reemission processes. Several experimental data sets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between the air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model
NASA Technical Reports Server (NTRS)
Nikbay, Melike; Heeg, Jennifer
2017-01-01
This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...
2017-11-20
The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
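A minimal sketch of the space-filling Latin hypercube sampling step is shown below using SciPy's quasi-Monte Carlo module; the parameter names echo commonly cited VIC soil parameters, but the bounds and sample size are illustrative rather than those used in the study.

```python
import numpy as np
from scipy.stats import qmc

# illustrative parameter names and bounds (placeholders, not the study's 46 parameters)
names = ["infilt", "Ds", "Dsmax", "Ws", "depth2"]
lower = np.array([0.001, 0.01, 1.0, 0.5, 0.1])
upper = np.array([0.9,   1.0, 30.0, 1.0, 3.0])

sampler = qmc.LatinHypercube(d=len(names), seed=42)
unit    = sampler.random(n=256)            # space-filling samples in [0, 1)^d
design  = qmc.scale(unit, lower, upper)    # rescale to the parameter bounds

# each row is one parameter set to feed to the hydrologic model (or its emulator)
for row in design[:3]:
    print(dict(zip(names, np.round(row, 3))))
```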
Matsuyama, Ryota; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya
2018-01-01
Background: A Rohingya refugee camp in Cox's Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The previously immune fraction among refugees cannot be explicitly estimated, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. Methods: A renewal process model was devised to estimate the R0 and the ascertainment rate of cases, and the loss of susceptible individuals was modeled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty in the initially immune fraction, we employed a Latin Hypercube sampling (LHS) method. Results: R0 ranged from 4.7 to 14.8, with the median estimate at 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. Discussion: The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that a vaccination coverage of 86% has to be achieved to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified. PMID:29629244
A seismic hazard uncertainty analysis for the New Madrid seismic zone
Cramer, C.H.
2001-01-01
A review of the scientific issues relevant to characterizing earthquake sources in the New Madrid seismic zone has led to the development of a logic tree of possible alternative parameters. A variability analysis, using Monte Carlo sampling of this consensus logic tree, is presented and discussed. The analysis shows that for 2%-exceedance-in-50-year hazard, the best-estimate seismic hazard map is similar to previously published seismic hazard maps for the area. For peak ground acceleration (PGA) and spectral acceleration at 0.2 and 1.0 s (0.2 and 1.0 s Sa), the coefficient of variation (COV) representing the knowledge-based uncertainty in seismic hazard can exceed 0.6 over the New Madrid seismic zone and diminishes to about 0.1 away from areas of seismic activity. Sensitivity analyses show that the largest contributor to PGA, 0.2 and 1.0 s Sa seismic hazard variability is the uncertainty in the location of future 1811-1812 New Madrid sized earthquakes. This is followed by the variability due to the choice of ground motion attenuation relation, the magnitude for the 1811-1812 New Madrid earthquakes, and the recurrence interval for M>6.5 events. Seismic hazard is not very sensitive to the variability in seismogenic width and length. Published by Elsevier Science B.V.
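The logic-tree Monte Carlo idea can be illustrated with a toy sketch like the one below; the branch values, weights, and the placeholder hazard function are invented and bear no relation to the New Madrid consensus logic tree.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical logic-tree branches: (values, weights) for three uncertain inputs
location_shift_km = ([0.0, 25.0, 50.0],       [0.5, 0.3, 0.2])   # source location alternatives
magnitude         = ([7.3, 7.7, 8.1],         [0.2, 0.6, 0.2])   # characteristic magnitude
recurrence_yr     = ([250.0, 500.0, 1000.0],  [0.25, 0.5, 0.25]) # recurrence interval

def toy_hazard(shift, mag, recur):
    """Placeholder ground-motion hazard (PGA in g); not a real attenuation relation."""
    return 0.6 * np.exp(0.9 * (mag - 7.7)) * np.exp(-shift / 60.0) * (500.0 / recur) ** 0.3

def draw(values, weights):
    return rng.choice(values, p=weights)

samples = np.array([
    toy_hazard(draw(*location_shift_km), draw(*magnitude), draw(*recurrence_yr))
    for _ in range(20_000)
])
cov = samples.std(ddof=1) / samples.mean()
print(f"best-estimate PGA ~ {samples.mean():.2f} g, COV = {cov:.2f}")
```

Sampling branches in proportion to their weights yields a distribution of hazard values whose spread (here summarized by the COV) represents the knowledge-based uncertainty.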
Matsuyama, Ryota; Akhmetzhanov, Andrei R; Endo, Akira; Lee, Hyojung; Yamaguchi, Takayuki; Tsuzuki, Shinya; Nishiura, Hiroshi
2018-01-01
A Rohingya refugee camp in Cox's Bazar, Bangladesh experienced a large-scale diphtheria epidemic in 2017. The previously immune fraction among refugees cannot be explicitly estimated, and thus we conducted an uncertainty analysis of the basic reproduction number, R0. A renewal process model was devised to estimate the R0 and the ascertainment rate of cases, and the loss of susceptible individuals was modeled as one minus the sum of the initially immune fraction and the fraction naturally infected during the epidemic. To account for the uncertainty in the initially immune fraction, we employed a Latin Hypercube sampling (LHS) method. R0 ranged from 4.7 to 14.8, with the median estimate at 7.2. R0 was positively correlated with ascertainment rates. Sensitivity analysis indicated that R0 would become smaller with greater variance of the generation time. The estimated R0 was broadly consistent with a published estimate from endemic data, indicating that a vaccination coverage of 86% has to be achieved to prevent the epidemic by means of mass vaccination. LHS was particularly useful in the setting of a refugee camp in which the background health status is poorly quantified.
NASA Astrophysics Data System (ADS)
Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu; Ren, Xinrong
2018-02-01
Making sense of modeled atmospheric composition requires not only comparison to in situ measurements but also knowing and quantifying the sensitivity of the model to its input factors. Using a global sensitivity method involving the simultaneous perturbation of many chemical transport model input factors, we find the model uncertainty for ozone (O3), hydroxyl radical (OH), and hydroperoxyl radical (HO2) mixing ratios, and apportion this uncertainty to specific model inputs for the DC-8 flight tracks corresponding to the NASA Intercontinental Chemical Transport Experiment (INTEX) campaigns of 2004 and 2006. In general, when uncertainties in modeled and measured quantities are accounted for, we find agreement between modeled and measured oxidant mixing ratios with the exception of ozone during the Houston flights of the INTEX-B campaign and HO2 for the flights over the northernmost Pacific Ocean during INTEX-B. For ozone and OH, modeled mixing ratios were most sensitive to a bevy of emissions, notably lightning NOx, various surface NOx sources, and isoprene. HO2 mixing ratios were most sensitive to CO and isoprene emissions as well as the aerosol uptake of HO2. With ozone and OH being generally overpredicted by the model, we find better agreement between modeled and measured vertical profiles when reducing NOx emissions from surface as well as lightning sources.
Probabilistic analysis of bladed turbine disks and the effect of mistuning
NASA Technical Reports Server (NTRS)
Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.
1990-01-01
Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first order analysis is used to compute CDF. The sensitivities of different random variables are identified. Effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have significant effect on the response distribution of the bladed rotor.
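NESSUS's advanced mean value method is more elaborate, but the underlying first-order idea can be sketched as below for a single-degree-of-freedom forced-response stand-in; all parameter values and the response function are hypothetical, not the bladed-disk model used in the study.

```python
import numpy as np

def response_amplitude(p):
    """Steady-state amplitude of a damped SDOF oscillator under harmonic forcing
    (a stand-in for a blade's resonant response; purely illustrative)."""
    r = p["omega_exc"] / p["omega_n"]    # frequency ratio
    return p["F0"] / (p["k"] * np.sqrt((1 - r ** 2) ** 2 + (2 * p["zeta"] * r) ** 2))

def mean_value_first_order(model, means, sigmas, h=1e-4):
    """First-order (mean value) estimate of output mean and standard deviation:
    Var[Y] ~ sum_i (dY/dx_i)^2 * sigma_i^2, gradients by central differences."""
    y0 = model(means)
    var = 0.0
    for name, s in sigmas.items():
        hi, lo = dict(means), dict(means)
        step = h * means[name]
        hi[name] += step; lo[name] -= step
        grad = (model(hi) - model(lo)) / (2 * step)
        var += (grad * s) ** 2
    return y0, np.sqrt(var)

means  = {"omega_n": 1200.0, "omega_exc": 1150.0, "zeta": 0.01, "F0": 10.0, "k": 5.0e5}
sigmas = {"omega_n": 24.0, "omega_exc": 12.0, "zeta": 0.002, "F0": 1.0}
print(mean_value_first_order(response_amplitude, means, sigmas))
```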
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
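The core regression machinery that UCODE_2005 documents (a weighted least-squares objective minimized with a modified Gauss-Newton method, and sensitivities from central-difference perturbation) can be sketched generically as follows; the toy process model, observations, and weights are invented, and this is a conceptual sketch rather than UCODE itself.

```python
import numpy as np

def jacobian_cd(sim, params, rel_step=1e-3):
    """Central-difference sensitivities of simulated equivalents to parameters."""
    p = np.asarray(params, float)
    y0 = sim(p)
    J = np.zeros((y0.size, p.size))
    for i in range(p.size):
        step = rel_step * max(abs(p[i]), 1e-12)
        hi, lo = p.copy(), p.copy()
        hi[i] += step; lo[i] -= step
        J[:, i] = (sim(hi) - sim(lo)) / (2 * step)
    return J

def gauss_newton(sim, obs, weights, p0, max_iter=20, damping=1.0, tol=1e-8):
    """Minimize the weighted least-squares objective sum(w*(obs - sim(p))^2)
    with a damped Gauss-Newton update."""
    p = np.asarray(p0, float)
    W = np.diag(weights)
    for _ in range(max_iter):
        r = obs - sim(p)
        J = jacobian_cd(sim, p)
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + damping * dp
        if np.linalg.norm(dp) < tol * (1 + np.linalg.norm(p)):
            break
    return p

# toy "process model": heads at three locations as a function of two parameters
def sim(p):
    logK, rech = p
    return np.array([10.0 - 2.0 * logK + 0.5 * rech,
                      8.0 - 1.5 * logK + 0.8 * rech,
                      6.0 - 1.0 * logK + 1.2 * rech])

obs = sim(np.array([1.2, 3.0])) + np.array([0.02, -0.03, 0.01])  # synthetic observations
print(gauss_newton(sim, obs, weights=np.ones(3), p0=np.array([0.5, 1.0])))
```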
Development of a sensitivity analysis technique for multiloop flight control systems
NASA Technical Reports Server (NTRS)
Vaillard, A. H.; Paduano, J.; Downing, D. R.
1985-01-01
This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system to variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.
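A minimal sketch of the singular-value computation underlying such an analysis is shown below: the minimum singular value of the return-difference matrix I + L(jw) is evaluated over frequency as a multiloop relative-stability indicator. The 2x2 loop transfer function is a toy stand-in, not the yaw/roll damper model from the report.

```python
import numpy as np

def loop_transfer(w):
    """Toy 2x2 loop transfer function L(jw); purely illustrative dynamics."""
    s = 1j * w
    return np.array([[2.0 / (s + 1.0), 0.3 / (s + 2.0)],
                     [0.2 / (s + 0.5), 1.5 / (s + 1.5)]])

freqs = np.logspace(-2, 2, 400)
min_sv = []
for w in freqs:
    R = np.eye(2) + loop_transfer(w)              # return-difference matrix I + L(jw)
    min_sv.append(np.linalg.svd(R, compute_uv=False)[-1])

# the smallest singular value over frequency indicates the multiloop stability margin
print(f"min over frequency of sigma_min(I + L) = {min(min_sv):.3f}")
```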
Sumner, T; Shephard, E; Bogle, I D L
2012-09-07
One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Brooks, Dusty Marie
Sandia National Laboratories (SNL) has conducted an uncertainty analysis (UA) on the Fukushima Daiichi unit 1 (1F1) accident progression with the MELCOR code. Volume I of the 1F1 UA discusses the physical modeling details and time history results of the UA. Volume II of the 1F1 UA discusses the statistical viewpoint. The model used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and the Nuclear Regulatory Commission (NRC). The goal of this work was to perform a focused evaluation of uncertainty in core damage progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, fraction of intact fuel, vessel lower head failure) and in doing so assess the applicability of traditional sensitivity analysis techniques.
3MRA UNCERTAINTY AND SENSITIVITY ANALYSIS
This presentation discusses the Multimedia, Multipathway, Multireceptor Risk Assessment (3MRA) modeling system. The outline of the presentation is: modeling system overview - 3MRA versions; 3MRA version 1.0; national-scale assessment dimensionality; SuperMUSE: windows-based super...
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Yuan, Zibing; Russell, Armistead G; Ou, Jiamin; Zhong, Zhuangmin
2017-04-04
The traditional reduced-form model (RFM), based on the high-order decoupled direct method (HDDM), is an efficient uncertainty analysis approach for air quality models, but it has large biases in uncertainty propagation due to the limitation of the HDDM in predicting nonlinear responses to large perturbations of model inputs. To overcome this limitation, a new stepwise RFM method that combines several sets of local sensitivity coefficients under different conditions is proposed. Evaluations reveal that the new RFM improves the prediction of nonlinear responses. The new method is applied to quantify uncertainties in simulated PM2.5 concentrations in the Pearl River Delta (PRD) region of China as a case study. Results show that the average uncertainty range of hourly PM2.5 concentrations is -28% to 57%, which covers approximately 70% of the observed PM2.5 concentrations, while the traditional RFM underestimates the upper bound of the uncertainty range by 1-6%. Using a variance-based method, the PM2.5 boundary conditions and primary PM2.5 emissions are found to be the two major uncertainty sources in PM2.5 simulations. The new RFM better quantifies the uncertainty range in model simulations and can be applied to improve applications that rely on uncertainty information.
NASA Astrophysics Data System (ADS)
Memarian, Hadi; Pourreza Bilondi, Mohsen; Rezaei, Majid
2016-08-01
This work aims to assess the capability of co-active neuro-fuzzy inference system (CANFIS) for drought forecasting of Birjand, Iran through the combination of global climatic signals with rainfall and lagged values of Standardized Precipitation Index (SPI) index. Using stepwise regression and correlation analyses, the signals NINO 1 + 2, NINO 3, Multivariate Enso Index, Tropical Southern Atlantic index, Atlantic Multi-decadal Oscillation index, and NINO 3.4 were recognized as the effective signals on the drought event in Birjand. Based on the results from stepwise regression analysis and regarding the processor limitations, eight models were extracted for further processing by CANFIS. The metrics P-factor and D-factor were utilized for uncertainty analysis, based on the sequential uncertainty fitting algorithm. Sensitivity analysis showed that for all models, NINO indices and rainfall variable had the largest impact on network performance. In model 4 (as the model with the lowest error during training and testing processes), NINO 1 + 2(t-5) with an average sensitivity of 0.7 showed the highest impact on network performance. Next, the variables rainfall, NINO 1 + 2(t), and NINO 3(t-6) with the average sensitivity of 0.59, 0.28, and 0.28, respectively, could have the highest effect on network performance. The findings based on network performance metrics indicated that the global indices with a time lag represented a better correlation with El Niño Southern Oscillation (ENSO). Uncertainty analysis of the model 4 demonstrated that 68 % of the observed data were bracketed by the 95PPU and D-Factor value (0.79) was also within a reasonable range. Therefore, the fourth model with a combination of the input variables NINO 1 + 2 (with 5 months of lag and without any lag), monthly rainfall, and NINO 3 (with 6 months of lag) and correlation coefficient of 0.903 (between observed and simulated SPI) was selected as the most accurate model for drought forecasting using CANFIS in the climatic region of Birjand.
NASA Astrophysics Data System (ADS)
Babendreier, J. E.
2002-05-01
Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidable one, particularly in regulatory settings applied on a national scale. Quantitative assessment of uncertainty and sensitivity within integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a systematic, comparative approach coupled with sufficient computational power. The Multimedia, Multipathway, and Multireceptor Risk Assessment Model (3MRA) is an important code being developed by the United States Environmental Protection Agency for use in site-scale risk assessment (e.g. hazardous waste management facilities). The model currently entails over 700 variables, 185 of which are explicitly stochastic. The 3MRA can start with a chemical concentration in a waste management unit (WMU). It estimates the release and transport of the chemical throughout the environment, and predicts associated exposure and risk. The 3MRA simulates multimedia (air, water, soil, sediments), pollutant fate and transport, multipathway exposure routes (food ingestion, water ingestion, soil ingestion, air inhalation, etc.), multireceptor exposures (resident, gardener, farmer, fisher, ecological habitats and populations), and resulting risk (human cancer and non-cancer effects, ecological population and community effects). The 3MRA collates the output for an overall national risk assessment, offering a probabilistic strategy as a basis for regulatory decisions. To facilitate model execution of 3MRA for purposes of conducting uncertainty and sensitivity analysis, a PC-based supercomputer cluster was constructed. Design of SuperMUSE, a 125 GHz Windows-based Supercomputer for Model Uncertainty and Sensitivity Evaluation is described, along with the conceptual layout of an accompanying java-based paralleling software toolset. Preliminary work is also reported for a scenario involving Benzene disposal that describes the relative importance of the vadose zone in driving risk levels for ecological receptors and human health. Incorporating landfills, waste piles, aerated tanks, surface impoundments, and land application units, the site-based data used in the analysis included 201 national facilities representing 419 site-WMU combinations.
Modeling transport phenomena and uncertainty quantification in solidification processes
NASA Astrophysics Data System (ADS)
Fezi, Kyle S.
Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water cooled mold followed by secondary cooling with a water jet spray and free falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been quantified in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.
Development of probabilistic internal dosimetry computer code
NASA Astrophysics Data System (ADS)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for total system performance assessment (TSPA) type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version 11.1. All the TSPA-LA uncertainty and sensitivity analyses modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling cases output generated in FY15 based on GoldSim Version 9.60.300 documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties
NASA Astrophysics Data System (ADS)
Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro
2013-12-01
We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
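The emulate-then-analyze workflow can be sketched as follows, assuming scikit-learn's Gaussian process regressor as the emulator and a pick-freeze estimator for first-order Sobol' indices; the stand-in model, inputs, and sample sizes are illustrative and unrelated to Polyphemus/Polair3D.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive_model(X):
    """Stand-in for the dispersion model: output driven mostly by x0, with a weak x1*x2 term."""
    return np.exp(X[:, 0]) + 0.3 * X[:, 1] * X[:, 2]

d, n_train = 3, 60
X_train = rng.random((n_train, d))
y_train = expensive_model(X_train)

# fit a Gaussian process emulator on a small design, then query it cheaply
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, y_train)
emulator = lambda X: gp.predict(X)

# pick-freeze (Sobol') first-order indices evaluated on the emulator
n = 4096
A, B = rng.random((n, d)), rng.random((n, d))
fA, fB = emulator(A), emulator(B)
var = np.var(np.concatenate([fA, fB]))
for i in range(d):
    ABi = A.copy(); ABi[:, i] = B[:, i]
    Si = np.mean(fB * (emulator(ABi) - fA)) / var
    print(f"S{i} ~= {Si:.2f}")
```

The emulator absorbs the computational cost: the many model evaluations required by the Sobol' estimators are made against the cheap surrogate rather than the full transport model.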
Incorporating uncertainty into medical decision making: an approach to unexpected test results.
Bianchi, Matt T; Alexander, Brian M; Cash, Sydney S
2009-01-01
The utility of diagnostic tests derives from the ability to translate the population concepts of sensitivity and specificity into information that will be useful for the individual patient: the predictive value of the result. As the array of available diagnostic testing broadens, there is a temptation to de-emphasize history and physical findings and defer to the objective rigor of technology. However, diagnostic test interpretation is not always straightforward. One significant barrier to routine use of probability-based test interpretation is the uncertainty inherent in pretest probability estimation, the critical first step of Bayesian reasoning. The context in which this uncertainty presents the greatest challenge is when test results oppose clinical judgment. It is in this situation that decision support would be most helpful. The authors propose a simple graphical approach that incorporates uncertainty in pretest probability and has specific application to the interpretation of unexpected results. This method quantitatively demonstrates how uncertainty in disease probability may be amplified when test results are unexpected (opposing clinical judgment), even for tests with high sensitivity and specificity. The authors provide a simple nomogram for determining whether an unexpected test result suggests that one should "switch diagnostic sides." This graphical framework overcomes the limitation of pretest probability uncertainty in Bayesian analysis and guides decision making when it is most challenging: interpretation of unexpected test results.
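As a minimal numerical companion to the nomogram idea (not the authors' graphical method), the sketch below propagates a band of pretest probabilities through Bayes' theorem for an unexpected negative result; the sensitivity, specificity and pretest range are illustrative assumptions.

```python
# Minimal sketch of how uncertainty in pretest probability propagates through Bayes'
# theorem when a test result opposes clinical judgment. All numbers are illustrative.
import numpy as np

def post_test_prob(pretest, sens, spec, result_positive):
    """Bayes' theorem for a dichotomous test."""
    if result_positive:
        num = sens * pretest
        den = num + (1.0 - spec) * (1.0 - pretest)
    else:
        num = (1.0 - sens) * pretest
        den = num + spec * (1.0 - pretest)
    return num / den

sens, spec = 0.95, 0.90
# Clinician believes disease is likely (pretest around 0.6-0.9) but the test is negative.
for p in np.linspace(0.6, 0.9, 4):
    print(f"pretest {p:.2f} -> post-test {post_test_prob(p, sens, spec, False):.2f}")
# Relative to their size, the post-test probabilities spread more widely than the pretest
# band, showing how interpretation of the unexpected (negative) result stays sensitive
# to the pretest estimate even for a test with high sensitivity and specificity.
```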
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.
Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-08-16
Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: a ) Compare the range in projected impacts that arises from using different adaptation modeling methods; b ) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c ) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.
NASA Astrophysics Data System (ADS)
Taverniers, Søren; Tartakovsky, Daniel M.
2017-11-01
Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
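The moment-estimation step can be illustrated with a one-dimensional stochastic collocation sketch; the exponential response below is an assumed stand-in for the nonlinear radiation-diffusion model, chosen only to show how a small input coefficient of variation can be magnified in the output.

```python
# Minimal sketch of stochastic collocation: statistical moments of a nonlinear response
# are computed from a few quadrature nodes in the uncertain input and compared with
# Monte Carlo. The toy response function is an assumption, not the paper's model.
import numpy as np

def response(c):
    # toy nonlinear "energy deposition" as a function of contrast-agent concentration
    return np.exp(2.0 * c)

mu, sigma = 1.0, 0.05            # uncertain concentration ~ N(mu, sigma^2)

# Stochastic collocation with probabilists' Gauss-Hermite nodes
nodes, weights = np.polynomial.hermite_e.hermegauss(5)
samples = mu + sigma * nodes
w = weights / np.sqrt(2.0 * np.pi)            # normalize to a standard-normal measure
mean_sc = np.sum(w * response(samples))
var_sc = np.sum(w * response(samples) ** 2) - mean_sc ** 2

# Brute-force Monte Carlo reference
rng = np.random.default_rng(1)
mc = response(rng.normal(mu, sigma, 200_000))

print("CV input :", sigma / mu)
print("CV output:", np.sqrt(var_sc) / mean_sc, mc.std() / mc.mean())
```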
A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.
Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E
2016-06-21
We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.
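A stripped-down version of the probabilistic, scenario-aware comparison might look like the sketch below; the impact factors, scenario multipliers and distributions are invented for illustration and are not taken from the pavement case study.

```python
# Minimal sketch of a probabilistic comparison of two alternatives under shared parameter
# uncertainty: for each scenario we estimate the probability that alternative A has lower
# impact. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
scenarios = {"low_traffic": 1.0, "high_traffic": 1.6}   # assumed scenario multipliers

for name, mult in scenarios.items():
    # both alternatives draw from the same uncertain inputs (correlated comparison)
    material = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n)   # kg CO2e per unit
    use_phase = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n) * mult
    impact_a = 1.0 * material + 1.0 * use_phase
    impact_b = 1.3 * material + 0.8 * use_phase
    print(f"{name}: P(A < B) = {np.mean(impact_a < impact_b):.2f}")
# Scenarios with P close to 0 or 1 are "resolvable"; intermediate values call for a
# sensitivity analysis to find which parameters drive the overlap.
```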
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolaczkowski, A.M.; Lambright, J.A.; Ferrell, W.L.
This document contains the internal event initiated accident sequence analyses for Peach Bottom, Unit 2; one of the reference plants being examined as part of the NUREG-1150 effort by the Nuclear Regulatory Commission. NUREG-1150 will document the risk of a selected group of nuclear power plants. As part of that work, this report contains the overall core damage frequency estimate for Peach Bottom, Unit 2, and the accompanying plant damage state frequencies. Sensitivity and uncertainty analyses provided additional insights regarding the dominant contributors to the Peach Bottom core damage frequency estimate. The mean core damage frequency at Peach Bottom wasmore » calculated to be 8.2E-6. Station blackout type accidents (loss of all ac power) were found to dominate the overall results. Anticipated Transient Without Scram accidents were also found to be non-negligible contributors. The numerical results are largely driven by common mode failure probability estimates and to some extent, human error. Because of significant data and analysis uncertainties in these two areas (important, for instance, to the most dominant scenario in this study), it is recommended that the results of the uncertainty and sensitivity analyses be considered before any actions are taken based on this analysis.« less
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
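The variance-decomposition step can be sketched with a quasi-random Sobol' design on a toy stand-in for the ABM; parameter names, ranges and the response function below are assumptions, not the farmland-conservation model itself.

```python
# Minimal sketch of using quasi-random sampling and variance decomposition to decide which
# inputs can be fixed when simplifying a model: inputs whose total-order Sobol' index is
# near zero contribute little to output variance.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {"num_vars": 3,
           "names": ["conservation_payment", "land_price", "neighbor_influence"],
           "bounds": [[0.0, 1.0]] * 3}

def toy_abm(x):
    # stand-in aggregate outcome (e.g., hectares conserved); not the actual ABM
    return 3.0 * x[:, 0] + 0.5 * x[:, 0] * x[:, 2] + 0.01 * x[:, 1]

X = saltelli.sample(problem, 2048)          # quasi-random (Sobol' sequence) design
Si = sobol.analyze(problem, toy_abm(X))
for name, st in zip(problem["names"], Si["ST"]):
    print(f"{name:22s} total-order index = {st:.3f}")
# land_price would be a candidate to fix at a nominal value in the simplified model.
```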
DOE Office of Scientific and Technical Information (OSTI.GOV)
HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.
2000-05-22
Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
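The sampling-based techniques listed above (Latin hypercube sampling, rank transformation, partial correlation) can be sketched as follows; the input names, ranges and the response function are placeholders rather than the WIPP two-phase flow model.

```python
# Minimal sketch of the sampling-based workflow: Latin hypercube sampling followed by a
# rank transformation and partial rank correlation coefficients (PRCCs).
import numpy as np
from scipy.stats import qmc, rankdata

rng = np.random.default_rng(3)
names = ["borehole_log_perm", "gas_gen_rate", "brine_far_field"]
sampler = qmc.LatinHypercube(d=3, seed=7)
U = sampler.random(n=300)
# scale to illustrative ranges (log-permeability, rate, dimensionless factor)
X = qmc.scale(U, l_bounds=[-14.0, 0.1, 0.0], u_bounds=[-11.0, 2.0, 1.0])

# toy "repository pressure" response, driven by the first two inputs plus noise
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=300)

def prcc(X, y, j):
    """Partial rank correlation of column j with y, controlling for the other columns."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    A = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
    res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

for j, name in enumerate(names):
    print(f"{name:18s} PRCC = {prcc(X, y, j):+.2f}")
```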
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.
Probabilistic Analysis of Solid Oxide Fuel Cell Based Hybrid Gas Turbine System
NASA Technical Reports Server (NTRS)
Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.
2003-01-01
The emergence of fuel cell systems and hybrid fuel cell systems requires the evolution of analysis strategies for evaluating thermodynamic performance. A gas turbine thermodynamic cycle integrated with a fuel cell was computationally simulated and probabilistically evaluated in view of the several uncertainties in the thermodynamic performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the uncertainties in the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost effective. The analysis leads to the selection of criteria for gas turbine performance.
Multivariate analysis techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bendavid, Josh; Fisher, Wade C.; Junk, Thomas R.
2016-01-01
The end products of experimental data analysis are designed to be simple and easy to understand: hypothesis tests and measurements of parameters. But, the experimental data themselves are voluminous and complex. Furthermore, in modern collider experiments, many petabytes of data must be processed in search of rare new processes which occur together with much more copious background processes that are of less interest to the task at hand. The systematic uncertainties on the background may be larger than the expected signal in many cases. The statistical power of an analysis and its sensitivity to systematic uncertainty can therefore usually both be improved by separating signal events from background events with higher efficiency and purity.
Xie, Shaocheng; Klein, Stephen A.; Zhang, Minghua; ...
2006-10-05
This study represents an effort to develop Single-Column Model (SCM) and Cloud-Resolving Model large-scale forcing data from a sounding array in the high latitudes. An objective variational analysis approach is used to process data collected from the Atmospheric Radiation Measurement Program (ARM) Mixed-Phase Arctic Cloud Experiment (M-PACE), which was conducted over the North Slope of Alaska in October 2004. In this method the observed surface and top of atmosphere measurements are used as constraints to adjust the sounding data from M-PACE in order to conserve column-integrated mass, heat, moisture, and momentum. Several important technical and scientific issues related to the data analysis are discussed. It is shown that the analyzed data reasonably describe the dynamic and thermodynamic features of the Arctic cloud systems observed during M-PACE. Uncertainties in the analyzed forcing fields are roughly estimated by examining the sensitivity of those fields to uncertainties in the upper-air data and surface constraints that are used in the analysis. Impacts of the uncertainties in the analyzed forcing data on SCM simulations are discussed. Results from the SCM tests indicate that the bulk features of the observed Arctic cloud systems can be captured qualitatively well using the forcing data derived in this study, and major model errors can be detected despite the uncertainties that exist in the forcing data as illustrated by the sensitivity tests. Lastly, the possibility of using the European Center for Medium-Range Weather Forecasts analysis data to derive the large-scale forcing over the Arctic region is explored.
NASA Astrophysics Data System (ADS)
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.
2016-05-01
Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical level of uncertainty ˜25-50% (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty ˜10-20%, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known, and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.
Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arpaia, P., E-mail: pasquale.arpaia@unina.it; Technology Department, European Organization for Nuclear Research; Girone, M., E-mail: mario.girone@cern.ch
2015-12-15
The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this end, an uncertainty model of the transducer is presented, based mainly on a valve model exploiting a finite-element approach and on a virtual flowmeter model based on the Sereg-Schlumberger method. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer metrological performance is assessed by a sensitivity analysis based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis is important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular for dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism. We refer to models subject to this kind of uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, simple and interpretable statistical quantities are needed to assess the sensitivity models and support evidence-based analysis. This paper proposes a novel approach that investigates the plausibility of each assumed missing data mechanism by comparing the datasets simulated from various MNAR (missing not at random) models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to apply a plausibility evaluation to each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, the analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
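A minimal sketch of the simulate-and-compare idea is given below, with an assumed logistic selection (MNAR) mechanism and a K-nearest-neighbour discrepancy; it is not the authors' implementation and the asymptotic theory is not reproduced here.

```python
# Minimal sketch: simulate observed-data sets under different sensitivity-parameter values
# of an assumed MNAR mechanism and compare each to the "real" observed data with
# K-nearest-neighbour distances.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def simulate_observed(delta, n=500):
    """Full data ~ N(0,1); probability of being observed depends on the value (MNAR)."""
    y = rng.normal(size=n)
    p_obs = 1.0 / (1.0 + np.exp(-(0.5 + delta * y)))
    return y[rng.uniform(size=n) < p_obs]

# "Real" observed data generated with an unknown true delta = 1.0
observed = simulate_observed(1.0)

def knn_discrepancy(sim, obs, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(sim.reshape(-1, 1))
    dist, _ = nn.kneighbors(obs.reshape(-1, 1))
    return dist.mean()

for delta in [0.0, 0.5, 1.0, 2.0]:
    d = np.mean([knn_discrepancy(simulate_observed(delta), observed) for _ in range(20)])
    print(f"delta = {delta:.1f}  mean kNN distance = {d:.4f}")
# Sensitivity-parameter values with small discrepancy are retained as plausible.
```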
NASA Astrophysics Data System (ADS)
Khuwaileh, Bassam
High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to performance of the KL approach and brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT -- COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation, a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE 6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
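The first-order approximate moment method can be sketched in a few lines for a generic response; the function, means and standard deviations below are assumptions standing in for the quasi 3-D Euler CFD output, and finite differences stand in for the code's analytic sensitivity derivatives.

```python
# Minimal sketch of the first-order approximate statistical moment method: output mean and
# variance are estimated from sensitivity derivatives and checked against Monte Carlo.
import numpy as np

def f(x):                        # stand-in for a CFD output, e.g. a lift coefficient
    return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[2]

mu = np.array([1.0, 0.8, 2.0])        # input means
sigma = np.array([0.05, 0.02, 0.1])   # independent, normally distributed input std devs

# Sensitivity derivatives at the mean (finite differences here; adjoint/AD in practice)
h = 1e-6
grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2 * h) for e in np.eye(3)])

mean_fo = f(mu)                              # first-order mean
var_fo = np.sum((grad * sigma) ** 2)         # first-order variance

rng = np.random.default_rng(5)
samples = rng.normal(mu, sigma, size=(100_000, 3))
mc = np.array([f(s) for s in samples])

print("first-order:", mean_fo, np.sqrt(var_fo))
print("Monte Carlo:", mc.mean(), mc.std())
```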
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Yixing; Adams, Brian M.; Witkowski, Walter R.
2011-04-01
The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).
Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake
NASA Astrophysics Data System (ADS)
Muller, S. J.; Gerber, S.
2013-12-01
The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections for the land carbon sink.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and off-shore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than when using only the prior information for the input data, meaning that the variation of the uncertain parameters is decreased and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
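A minimal sketch of the Bayesian parameter-estimation step (random-walk Metropolis on a toy forward model, not Delft3D) is shown below; the forward model, prior bounds and noise level are assumptions.

```python
# Minimal sketch of Bayesian parameter estimation with a random-walk Metropolis sampler:
# the posterior of a single forward-model parameter is inferred from noisy "observations"
# and could then feed an uncertainty analysis of model outputs.
import numpy as np

rng = np.random.default_rng(11)

def forward(theta, x):
    return theta * np.sqrt(x)            # toy stand-in for a wave-height response

x_obs = np.linspace(1.0, 10.0, 25)
theta_true, noise_sd = 0.8, 0.05
y_obs = forward(theta_true, x_obs) + rng.normal(0.0, noise_sd, x_obs.size)

def log_post(theta):
    if not (0.0 < theta < 5.0):          # uniform prior on (0, 5)
        return -np.inf
    resid = y_obs - forward(theta, x_obs)
    return -0.5 * np.sum((resid / noise_sd) ** 2)

chain, theta = [], 1.5
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.05)          # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)

post = np.array(chain[5_000:])                    # discard burn-in
print(f"posterior mean {post.mean():.3f}, 95% CI "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```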
Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, J E; Fratoni, M; Kramer, K J
A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis. All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.
Optimization Under Uncertainty for Electronics Cooling Design
NASA Astrophysics Data System (ADS)
Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.
Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, produces output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as mean and standard deviation in the output quantities, auxiliary data from an uncertainty based optimization, such as local and global sensitivities, help the designer decide the input parameter(s) to which the output quantity of interest is most sensitive. This helps the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
NASA Astrophysics Data System (ADS)
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Huth, N.; Marin, F.; Martiné, J.-F.
2014-01-01
Agro-Land Surface Models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of Agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS' phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte-Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte-Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugar cane cultivation in Australia and Brazil. Ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugar cane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
NASA Astrophysics Data System (ADS)
Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Caubel, A.; Huth, N.; Marin, F.; Martiné, J.-F.
2014-06-01
Agro-land surface models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugarcane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugarcane cultivation in Australia and Brazil. The ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugarcane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
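The screening step can be illustrated with the Morris elementary-effects method from SALib; note this is one common screening approach and may differ from the procedure actually used in the study, and the parameter names, ranges and toy yield function are assumptions.

```python
# Minimal sketch of parameter screening with the Morris elementary-effects method:
# parameters with small mu* contribute little and can be screened out before the more
# expensive Monte Carlo uncertainty step. All names, ranges and the response are toys.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {"num_vars": 4,
           "names": ["Vcmax_opt", "T_opt_photo", "extinction_coeff", "root_growth_rate"],
           "bounds": [[30.0, 90.0], [20.0, 35.0], [0.3, 0.7], [0.5, 2.0]]}

def toy_yield(x):
    # stand-in for harvested biomass from the chained crop/land-surface model
    return 0.2 * x[:, 0] - 0.5 * (x[:, 1] - 28.0) ** 2 + 5.0 * x[:, 2] + 0.1 * x[:, 3]

X = morris_sample.sample(problem, N=200, num_levels=4)
Si = morris_analyze.analyze(problem, X, toy_yield(X), num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name:18s} mu* = {mu_star:.2f}")
```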
Extracting falsifiable predictions from sloppy models.
Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P
2007-12-01
Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
Sensitivity Analysis for Studying Impacts of Aging on Population Toxicokinetics and Toxicodynamics
Assessing the impacts of toxicant exposures upon susceptible populations such as the elderly requires adequate characterization of prior long-term exposures, reductions in various organ functions, and potential intake of multiple drugs. Additionally, significant uncertainties and...
The critical role of uncertainty in projections of hydrological extremes
NASA Astrophysics Data System (ADS)
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring one of the three uncertainty sources may cause great risk to future hydrological extreme adaptations and water resource planning and management.
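The distribution-fit component of the uncertainty budget can be sketched as below: a GEV distribution is fitted to synthetic annual maxima (not the Biala Tarnowska record) and the fitting uncertainty of a return level is approximated by a bootstrap.

```python
# Minimal sketch of the distribution-fit step: fit a GEV to annual maximum flows, compute
# a 100-year return level, and bootstrap the fit to expose the distribution-fit uncertainty.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(21)
annual_max = genextreme.rvs(c=-0.1, loc=60.0, scale=20.0, size=40, random_state=21)

def return_level(sample, T=100):
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

q100 = return_level(annual_max)
boot = [return_level(rng.choice(annual_max, size=annual_max.size, replace=True))
        for _ in range(500)]
print(f"100-year flow: {q100:.1f}, bootstrap 90% interval "
      f"({np.quantile(boot, 0.05):.1f}, {np.quantile(boot, 0.95):.1f})")
```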
Dieye, A.M.; Roy, David P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.
2012-01-01
Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classification of multi-temporal Landsat satellite data were used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). The GEMS was run for an area of 1560 km2 in Senegal under three climate change scenarios with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show is dependent not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.
NASA Astrophysics Data System (ADS)
Kestens, Vikram; Bozatzidis, Vassili; De Temmerman, Pieter-Jan; Ramaye, Yannic; Roebben, Gert
2017-08-01
Particle tracking analysis (PTA) is an emerging technique suitable for size analysis of particles with external dimensions in the nano- and sub-micrometre scale range. Only limited attempts have so far been made to investigate and quantify the performance of the PTA method for particle size analysis. This article presents the results of a validation study during which selected colloidal silica and polystyrene latex reference materials with particle sizes in the range of 20 nm to 200 nm were analysed with NS500 and LM10-HSBF NanoSight instruments and video analysis software NTA 2.3 and NTA 3.0. Key performance characteristics such as working range, linearity, limit of detection, limit of quantification, sensitivity, robustness, precision and trueness were examined according to recommendations proposed by EURACHEM. A model for measurement uncertainty estimation following the principles described in ISO/IEC Guide 98-3 was used for quantifying random and systematic variations. For the nominal 50 nm and 100 nm polystyrene and the nominal 80 nm silica reference materials, the relative expanded measurement uncertainties for the three measurands of interest, being the mode, median and arithmetic mean of the number-weighted particle size distribution, varied from about 10% to 12%. For the nominal 50 nm polystyrene material, the relative expanded uncertainty of the arithmetic mean of the particle size distributions increased up to 18%, which was due to the presence of agglomerates. Data analysis was performed with software NTA 2.3 and NTA 3.0; the latter was shown to be superior in terms of sensitivity and resolution.
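A minimal GUM-style (ISO/IEC Guide 98-3) uncertainty-budget sketch of the kind referred to above is given below; the replicate values and the relative systematic components are illustrative assumptions, not the study's actual budget.

```python
# Minimal sketch of a GUM-style uncertainty budget: a Type A repeatability term and assumed
# Type B systematic terms are combined in quadrature and expanded with coverage factor k = 2.
import numpy as np

replicate_modes = np.array([101.2, 99.8, 100.6, 102.1, 100.0, 101.5])  # nm, repeated PTA runs

mean_mode = replicate_modes.mean()
u_repeat = replicate_modes.std(ddof=1) / np.sqrt(replicate_modes.size)  # Type A
u_calibration = 0.015 * mean_mode      # assumed 1.5 % relative standard uncertainty
u_trueness = 0.02 * mean_mode          # assumed bias-related standard uncertainty

u_combined = np.sqrt(u_repeat**2 + u_calibration**2 + u_trueness**2)
U_expanded = 2.0 * u_combined          # coverage factor k = 2 (approx. 95 % coverage)

print(f"mode = {mean_mode:.1f} nm, U = {U_expanded:.1f} nm "
      f"({100 * U_expanded / mean_mode:.1f} % relative expanded uncertainty)")
```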
A Python Interface for the Dakota Iterative Systems Analysis Toolkit
NASA Astrophysics Data System (ADS)
Piper, M.; Hutton, E.; Syvitski, J. P.
2016-12-01
Uncertainty quantification is required to improve the accuracy, reliability, and accountability of Earth science models. Dakota is a software toolkit, developed at Sandia National Laboratories, that provides an interface between models and a library of analysis methods, including support for sensitivity analysis, uncertainty quantification, optimization, and calibration techniques. Dakota is a powerful tool, but its learning curve is steep: the user not only must understand the structure and syntax of the Dakota input file, but also must develop intermediate code, called an analysis driver, that allows Dakota to run a model. The CSDMS Dakota interface (CDI) is a Python package that wraps and extends Dakota's user interface. It simplifies the process of configuring and running a Dakota experiment. A user can program to the CDI, allowing a Dakota experiment to be scripted. The CDI creates Dakota input files and provides a generic analysis driver. Any model written in Python that exposes a Basic Model Interface (BMI), as well as any model componentized in the CSDMS modeling framework, automatically works with the CDI. The CDI has a plugin architecture, so models written in other languages, or those that don't expose a BMI, can be accessed by the CDI by programmatically extending a template; an example is provided in the CDI distribution. Currently, six Dakota analysis methods have been implemented for examples from the much larger Dakota library. To demonstrate the CDI, we performed an uncertainty quantification experiment with the HydroTrend hydrological water balance and transport model. In the experiment, we evaluated the response of long-term suspended sediment load at the river mouth (Qs) to uncertainty in two input parameters, annual mean temperature (T) and precipitation (P), over a series of 100-year runs, using the polynomial chaos method. Through Dakota, we calculated moments, local and global (Sobol') sensitivity indices, and probability density and cumulative distribution functions for the response.
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models or for models involving parametric variability and uncertainty of different natures. General MCS, or variants such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness handled as randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study introduces Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness, with confidence intervals given by α-cuts. An important property of this approach is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results produces indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is demonstrated by assessing the propagation of parameter uncertainty in the estimation of the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
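A minimal sketch of the FLHS idea (not the authors' code) is shown below: Latin hypercube sampling handles an aleatory input while alpha-cuts of a triangular fuzzy number handle an epistemic input, so that each alpha level yields an interval-valued output statistic; the toy model and numbers are assumptions.

```python
# Minimal sketch combining Latin hypercube sampling for an aleatory (random) input with
# alpha-cuts of a triangular fuzzy number for an epistemic input; for each alpha level the
# model output statistic becomes an interval.
import numpy as np
from scipy.stats import qmc, norm

def model(K, R):
    # toy "well concentration": source term R attenuated by a sorption-like parameter K
    return R * np.exp(-K)

# Aleatory input: source concentration, lognormally distributed, sampled by LHS
lhs = qmc.LatinHypercube(d=1, seed=4)
R = np.exp(norm.ppf(lhs.random(n=500).ravel(), loc=np.log(10.0), scale=0.3))

# Epistemic input: sorption parameter as a triangular fuzzy number (1.0, 1.5, 2.5)
def alpha_cut(alpha, a=1.0, m=1.5, b=2.5):
    return a + alpha * (m - a), b - alpha * (b - m)

for alpha in [0.0, 0.5, 1.0]:
    k_lo, k_hi = alpha_cut(alpha)
    out_lo, out_hi = model(k_hi, R).mean(), model(k_lo, R).mean()   # model decreases with K
    print(f"alpha = {alpha:.1f}: mean concentration in [{out_lo:.2f}, {out_hi:.2f}]")
```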
Case studies in Bayesian microbial risk assessments.
Kennedy, Marc C; Clough, Helen E; Turner, Joanne
2009-12-21
The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
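The emulator details of the second case study are not given in the abstract; the sketch below illustrates the general idea of fitting a cheap statistical approximation to a small set of training runs and using it for sensitivity screening. It uses scikit-learn's Gaussian process regressor and a hypothetical three-input simulator, not the VTEC O157 herd model.

    # Generic emulator-based screening sketch (not the authors' exact method).
    # A Gaussian process is fitted to a small number of training runs of an
    # expensive simulator, then main effects are explored cheaply on the emulator.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulator(x):
        """Hypothetical simulator with 3 inputs; input 0 dominates the output."""
        return 4.0 * x[:, 0] + 0.5 * np.sin(3 * x[:, 1]) + 0.1 * x[:, 2] ** 2

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 1, size=(40, 3))          # 40 training runs
    y_train = expensive_simulator(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.3, 0.3, 0.3]),
                                  normalize_y=True).fit(X_train, y_train)

    # Cheap main-effect screening on the emulator: vary one input over its range
    # while fixing the others at 0.5, and compare the output ranges.
    grid = np.linspace(0, 1, 50)
    for i in range(3):
        X_probe = np.full((50, 3), 0.5)
        X_probe[:, i] = grid
        y_probe = gp.predict(X_probe)
        print(f"input {i}: main-effect range on emulator = {np.ptp(y_probe):.3f}")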
Benchmark On Sensitivity Calculation (Phase III)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, Tatiana; Laville, Cedric; Dyrda, James
2012-01-01
The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
NASA Astrophysics Data System (ADS)
Zell, Wesley O.; Culver, Teresa B.; Sanford, Ward E.
2018-06-01
Uncertainties about the age of base-flow discharge can have serious implications for the management of degraded environmental systems where subsurface pathways, and the ongoing release of pollutants that accumulated in the subsurface during past decades, dominate the water quality signal. Numerical groundwater models may be used to estimate groundwater return times and base-flow ages and thus predict the time required for stakeholders to see the results of improved agricultural management practices. However, the uncertainty inherent in the relationship between (i) the observations of atmospherically-derived tracers that are required to calibrate such models and (ii) the predictions of system age that the observations inform has not been investigated. For example, few if any studies have assessed the uncertainty of numerically-simulated system ages or evaluated the uncertainty reductions that may result from the expense of collecting additional subsurface tracer data. In this study we combine numerical flow and transport modeling of atmospherically-derived tracers with prediction uncertainty methods to accomplish four objectives. First, we show the relative importance of head, discharge, and tracer information for characterizing response times in a uniquely data-rich catchment that includes 266 age-tracer measurements (SF6, CFCs, and 3H) in addition to long-term monitoring of water levels and stream discharge. Second, we calculate uncertainty intervals for model-simulated base-flow ages using both linear and non-linear methods, and find that the prediction sensitivity vector used by linear first-order second-moment methods results in much larger uncertainties than non-linear Monte Carlo methods operating on the same parameter uncertainty. Third, by combining prediction uncertainty analysis with multiple models of the system, we show that data-worth calculations and monitoring network design are sensitive to variations in the amount of water leaving the system via stream discharge and irrigation withdrawals. Finally, we demonstrate a novel model-averaged computation of potential data worth that can account for these uncertainties in model structure.
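A minimal numerical contrast of the two uncertainty methods compared in the study, linear first-order second-moment (FOSM) propagation through a prediction sensitivity vector versus nonlinear Monte Carlo sampling of the same parameter uncertainty, is sketched below. The "age" prediction and its parameters are hypothetical stand-ins, not the calibrated groundwater model.

    # FOSM (linear) vs Monte Carlo (nonlinear) prediction uncertainty, on a toy
    # nonlinear "base-flow age" prediction. Parameters and model are hypothetical.
    import numpy as np

    def predicted_age(p):
        """Toy nonlinear prediction: age rises sharply as recharge-like r falls."""
        k, r = p
        return 35.0 * np.exp(0.8 * k) / r

    p_mean = np.array([0.5, 1.2])                 # calibrated parameter values
    C = np.diag([0.2**2, 0.3**2])                 # posterior parameter covariance

    # FOSM: sigma_pred^2 = J C J^T with J the prediction sensitivity vector
    eps = 1e-6
    J = np.array([(predicted_age(p_mean + eps * np.eye(2)[i]) -
                   predicted_age(p_mean)) / eps for i in range(2)])
    fosm_std = np.sqrt(J @ C @ J)

    # Monte Carlo: sample the same parameter uncertainty, propagate nonlinearly
    rng = np.random.default_rng(1)
    samples = rng.multivariate_normal(p_mean, C, size=20000)
    mc_ages = predicted_age(samples.T)

    print("FOSM 95% interval half-width :", 1.96 * fosm_std)
    print("Monte Carlo 2.5-97.5% range  :", np.percentile(mc_ages, [2.5, 97.5]))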
Not simply more of the same: distinguishing between patient heterogeneity and parameter uncertainty.
Vemer, Pepijn; Goossens, Lucas M A; Rutten-van Mölken, Maureen P M H
2014-11-01
In cost-effectiveness (CE) Markov models, heterogeneity in the patient population is not automatically taken into account. We aimed to compare methods of dealing with heterogeneity on estimates of CE, using a case study in chronic obstructive pulmonary disease (COPD). We first present a probabilistic sensitivity analysis (PSA) in which we sampled only from distributions representing parameter uncertainty; this ignores any heterogeneity. Next, we explored heterogeneity by presenting results for subgroups, using a method that samples parameter uncertainty simultaneously with heterogeneity in a single-loop PSA. Finally, we distinguished parameter uncertainty from heterogeneity in a double-loop PSA by performing a nested simulation within each PSA iteration. Point estimates and uncertainty differed substantially between methods. The incremental CE ratio (ICER) ranged from €4900 to €13,800. The single-loop PSA led to a substantially different shape of the CE plane and an overestimation of the uncertainty compared with the other three methods. The CE plane for the double-loop PSA showed substantially less uncertainty and a stronger negative correlation between the difference in costs and the difference in effects compared with the other methods, at the cost of longer calculation times. Not accounting for heterogeneity, performing subgroup analyses, and using the double-loop PSA can each be viable options, depending on the decision makers' information needs. The single-loop PSA should not be used in CE research: it disregards the fundamental differences between heterogeneity and sampling uncertainty and overestimates uncertainty as a result. © The Author(s) 2014.
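The nested structure of the double-loop PSA can be sketched in a few lines: an outer loop draws the uncertain model parameters, and an inner loop simulates a heterogeneous patient population under those fixed parameters. The cost/QALY model and all numbers below are hypothetical, not those of the COPD case study.

    # Schematic double-loop PSA: the outer loop samples parameter uncertainty,
    # the inner loop samples patient heterogeneity for those fixed parameters.
    # The cost/QALY model and all numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(7)
    N_OUTER, N_INNER = 1000, 500           # PSA iterations, simulated patients

    inc_costs, inc_effects = [], []
    for _ in range(N_OUTER):
        # Outer loop: one draw of the uncertain model parameters
        rel_risk = rng.normal(0.80, 0.05)          # treatment effect
        cost_tx = rng.gamma(shape=100, scale=30)   # annual treatment cost

        # Inner loop: heterogeneity - patients differ in baseline event rate
        baseline_rate = rng.lognormal(mean=0.0, sigma=0.5, size=N_INNER)
        cost_per_event, qaly_loss_per_event = 2000.0, 0.02

        d_cost = np.mean(cost_tx - (1 - rel_risk) * baseline_rate * cost_per_event)
        d_qaly = np.mean((1 - rel_risk) * baseline_rate * qaly_loss_per_event)
        inc_costs.append(d_cost)
        inc_effects.append(d_qaly)

    icer = np.mean(inc_costs) / np.mean(inc_effects)
    print(f"ICER ~ {icer:,.0f} per QALY; "
          f"cost-effect correlation = {np.corrcoef(inc_costs, inc_effects)[0, 1]:.2f}")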
Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.
NASA Astrophysics Data System (ADS)
Maheras, Steven James
Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine -129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two order of magnitude range. However, plutonium-239 results were not lognormally distributed and exhibited less than one order of magnitude range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and the underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
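A complementary cumulative distribution function of dose is straightforward to construct from Monte Carlo output; the sketch below assumes a purely illustrative lognormal dose sample rather than the repository models described above.

    # Building a complementary cumulative distribution function (CCDF) from
    # Monte Carlo dose samples. The lognormal dose model is illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    doses = rng.lognormal(mean=np.log(1e-2), sigma=1.5, size=10000)  # mSv/yr, toy

    sorted_doses = np.sort(doses)
    # CCDF: probability that the dose exceeds each sorted value
    ccdf = 1.0 - np.arange(1, doses.size + 1) / doses.size

    limit = 0.25  # hypothetical regulatory limit, mSv/yr
    prob_exceed = np.mean(doses > limit)
    print(f"P(dose > {limit} mSv/yr) = {prob_exceed:.4f}")
    print("P(exceeding the median dose) =", ccdf[doses.size // 2])
    print("dose spans", np.log10(sorted_doses[-1] / sorted_doses[0]),
          "orders of magnitude")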
REDD+ emissions estimation and reporting: dealing with uncertainty
NASA Astrophysics Data System (ADS)
Pelletier, Johanne; Martin, Davy; Potvin, Catherine
2013-09-01
The United Nations Framework Convention on Climate Change (UNFCCC) defined the technical and financial modalities of policy approaches and incentives to reduce emissions from deforestation and forest degradation in developing countries (REDD+). Substantial technical challenges hinder precise and accurate estimation of forest-related emissions and removals, as well as the setting and assessment of reference levels. These challenges could limit country participation in REDD+, especially if REDD+ emission reductions were to meet the quality standards required to serve as compliance-grade offsets for developed countries' emissions. Using Panama as a case study, we tested the matrix approach proposed by Bucki et al (2012 Environ. Res. Lett. 7 024005) to perform sensitivity and uncertainty analysis, distinguishing between 'modelling sources' of uncertainty, which refer to model-specific parameters and assumptions, and 'recurring sources' of uncertainty, which refer to random and systematic errors in emission factors and activity data. The sensitivity analysis estimated differences in the resulting fluxes ranging from 4.2% to 262.2% of the reference emission level. The classification of fallows and the carbon stock increment (carbon accumulation) of intact forest lands were the two key parameters showing the largest sensitivity. The largest error propagated using Monte Carlo simulations was caused by modelling sources of uncertainty, which calls for special attention to consistency in REDD+ reporting, essential for securing environmental integrity. Given the role of these modelling sources of uncertainty, the adoption of strict rules for estimation and reporting would favour comparability of emission reductions between countries. We believe that a reduction of the bias in emission factors will arise, among other things, from a globally concerted effort to improve allometric equations for tropical forests. Public access to the datasets and methodology used to evaluate reference levels and emission reductions would strengthen the credibility of the system by promoting accountability and transparency. To secure conservativeness and deal with uncertainty, further research using real data available to developing countries is needed to test the applicability of conservative discounts, including the trend uncertainty, and other options that would provide real incentives and stimulate improvements over time. Finally, we argue that assessing REDD+ result-based actions on the basis of a dashboard of performance indicators, not only in 'tonnes CO2 equ. per year', might provide a more holistic approach, at least until forest carbon stock emission and removal estimates reach the accuracy and certainty needed to support REDD+ policy.
Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa
2016-05-24
Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could only be deployed with a limited degree of cooling (0.5 °C) only after 2050, when climate sensitivity uncertainty is assumed to be resolved and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value is originated from its rapid cooling capability that would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels were in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.
Nuclear Data Needs for the Neutronic Design of MYRRHA Fast Spectrum Research Reactor
NASA Astrophysics Data System (ADS)
Stankovskiy, A.; Malambu, E.; Van den Eynde, G.; Díez, C. J.
2014-04-01
A global sensitivity analysis of the effective neutron multiplication factor (keff) to the choice of nuclear data library has been performed. It revealed that the test version of the JEFF-3.2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than JEFF-3.1.2 does. The analysis of the contributions of individual evaluations to the keff sensitivity yielded a priority list of nuclides whose cross-section and fission neutron multiplicity uncertainties should be reduced by setting up dedicated differential and integral experiments.
ERIC Educational Resources Information Center
Neil, Louise; Olsson, Nora Choque; Pellicano, Elizabeth
2016-01-01
Guided by a recent theory that proposes fundamental differences in how autistic individuals deal with uncertainty, we investigated the extent to which the cognitive construct "intolerance of uncertainty" and anxiety were related to parental reports of sensory sensitivities in 64 autistic and 85 typically developing children aged…
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
NASA Astrophysics Data System (ADS)
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known. More challenges are expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic (VTI) layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to source-receiver offset, vertical interval velocity error and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, the method is extremely sensitive to the time-picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
Large uncertainty in permafrost carbon stocks due to hillslope soil deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shelef, Eitan; Rowland, Joel C.; Wilson, Cathy J.
Here, northern circumpolar permafrost soils contain more than a third of the global Soil Organic Carbon (SOC) pool. The sensitivity of this carbon pool to a changing climate is a primary source of uncertainty in simulation-based climate projections. These projections, however, do not account for the accumulation of soil deposits at the base of hillslopes (hill-toes), and the influence of this accumulation on the distribution, sequestration, and decomposition of SOC in landscapes affected by permafrost. Here we combine topographic models with soil-profile data and topographic analysis to evaluate the quantity and uncertainty of SOC mass stored in perennially frozen hill-toe soil deposits. We show that in Alaska this SOC mass introduces an uncertainty that is > 200% of state-wide estimates of SOC stocks (77 PgC), and that a similarly large uncertainty may also pertain at a circumpolar scale. Soil sampling and geophysical-imaging efforts that target hill-toe deposits can help constrain this large uncertainty.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
Water quality modeling for urban reach of Yamuna river, India (1999-2009), using QUAL2Kw
NASA Astrophysics Data System (ADS)
Sharma, Deepshikha; Kansal, Arun; Pelletier, Greg
2017-06-01
The study aimed to characterize and understand the water quality of the river Yamuna in Delhi (India) prior to developing an efficient restoration plan. Collection of monitoring data, mathematical modeling, sensitivity analysis, and uncertainty analysis were combined using QUAL2Kw, a river quality model. The model was applied to simulate DO, BOD, total coliform, and total nitrogen at four monitoring stations, namely Palla, Old Delhi Railway Bridge, Nizamuddin, and Okhla, for 10 years (October 1999-June 2009), excluding the monsoon seasons (July-September). The study period was divided into two parts: monthly average data from October 1999-June 2004 (45 months) were used to calibrate the model, and monthly average data from October 2005-June 2009 (45 months) were used to validate it. The R2 for CBODf and TN lies within the range of 0.53-0.75 and 0.68-0.83, respectively, showing that the model gives satisfactory results in terms of R2 for CBODf, TN, and TC. Sensitivity analysis showed that DO, CBODf, TN, and TC predictions are highly sensitive to headwater flow and point-source flow and quality. Uncertainty analysis using Monte Carlo simulation showed that the input data have been simulated in accordance with the prevalent river conditions.
Development, sensitivity and uncertainty analysis of LASH model
USDA-ARS?s Scientific Manuscript database
Many hydrologic models have been developed to help manage natural resources all over the world. Nevertheless, most models have presented a high complexity regarding data base requirements, as well as, many calibration parameters. This has brought serious difficulties for applying them in watersheds ...
NASA Astrophysics Data System (ADS)
Sherman, James P.; McComiskey, Allison
2018-03-01
Aerosol optical properties measured at Appalachian State University's co-located NASA AERONET and NOAA ESRL aerosol network monitoring sites over a nearly four-year period (June 2012-Feb 2016) are used, along with satellite-based surface reflectance measurements, to study the seasonal variability of diurnally averaged clear sky aerosol direct radiative effect (DRE) and radiative efficiency (RE) at the top-of-atmosphere (TOA) and at the surface. Aerosol chemistry and loading at the Appalachian State site are likely representative of the background southeast US (SE US), home to high summertime aerosol loading and one of only a few regions not to have warmed during the 20th century. This study is the first multi-year ground-truth DRE study in the SE US, using aerosol network data products that are often used to validate satellite-based aerosol retrievals. The study is also the first in the SE US to quantify DRE uncertainties and sensitivities to aerosol optical properties and surface reflectance, including their seasonal dependence. Median DRE for the study period is -2.9 W m-2 at the TOA and -6.1 W m-2 at the surface. Monthly median and monthly mean DRE at the TOA (surface) are -1 to -2 W m-2 (-2 to -3 W m-2) during winter months and -5 to -6 W m-2 (-10 W m-2) during summer months. The DRE cycles follow the annual cycle of aerosol optical depth (AOD), which is 9 to 10 times larger in summer than in winter. Aerosol RE is anti-correlated with DRE, with winter values 1.5 to 2 times more negative than summer values. Due to the large seasonal dependence of aerosol DRE and RE, we quantify the sensitivity of DRE to aerosol optical properties and surface reflectance using a calendar day representative of each season (21 December for winter, 21 March for spring, 21 June for summer, and 21 September for fall). We use these sensitivities along with measurement uncertainties of aerosol optical properties and surface reflectance to calculate DRE uncertainties. We also estimate uncertainty in calculated diurnally-averaged DRE due to diurnal aerosol variability. Aerosol DRE at both the TOA and surface is most sensitive to changes in AOD, followed by single-scattering albedo (ω0). One exception is under the high summertime aerosol loading conditions (AOD ≥ 0.15 at 550 nm), when sensitivity of TOA DRE to ω0 is comparable to that of AOD. Aerosol DRE is less sensitive to changes in scattering asymmetry parameter (g) and surface reflectance (R). While DRE sensitivity to AOD varies by only ˜ 25 to 30 % with season, DRE sensitivity to ω0, g, and R largely follows the annual AOD cycle at APP, varying by factors of 8 to 15 with season. Since the measurement uncertainties of AOD, ω0, g, and R are comparable at Appalachian State, their relative contributions to DRE uncertainty are largely influenced by their (seasonally dependent) DRE sensitivity values, which suggests that the seasonal dependence of DRE uncertainty must be accounted for. Clear sky aerosol DRE uncertainty at the TOA (surface) due to measurement uncertainties ranges from 0.45 W m-2 (0.75 W m-2) for December to 1.1 W m-2 (1.6 W m-2) for June. Expressed as a fraction of DRE computed using monthly median aerosol optical properties and surface reflectance, the DRE uncertainties at the TOA (surface) are 20 to 24 % (15 to 22 %) for March, June, and September and 49 % (50 %) for December. The relatively low DRE uncertainties are largely due to the low uncertainty in AOD measured by AERONET. Use of satellite-based AOD measurements by MODIS in the DRE calculations increases DRE uncertainties by a factor of 2 to 5, and DRE uncertainties are then dominated by AOD uncertainty for all seasons. Diurnal variability in AOD (and to a lesser extent g) contributes to uncertainties in DRE calculated using daily-averaged aerosol optical properties that are slightly larger (by ˜ 20 to 30 %) than the DRE uncertainties due to measurement uncertainties during summer and fall, with comparable uncertainties during winter and spring.
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization which involve multiple formations and inversions of the global stiffness matrix and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Finite Element Model Calibration Approach for Ares I-X
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.
2010-01-01
Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model
NASA Astrophysics Data System (ADS)
Zhang, Y.; Pohlmann, K.
2016-12-01
Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
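The study's Monte Carlo analysis used the steady-state model with SENSAN and MODPATH; the sketch below is only a toy analogue showing how sampled hydraulic conductivity translates into a spread of advective travel times through a 1-D Darcy approximation, with all values hypothetical.

    # Toy Monte Carlo illustration (the study itself used the converted model with
    # SENSAN and MODPATH). Hydraulic conductivity is sampled and advective travel
    # time follows from a 1-D Darcy approximation; all values are hypothetical.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(11)
    n = 5000
    logK = rng.normal(loc=0.0, scale=0.7, size=n)   # log10 K, K in m/d
    K = 10.0 ** logK

    gradient, porosity, path_length = 0.005, 0.25, 2000.0   # -, -, m
    velocity = K * gradient / porosity                       # seepage velocity, m/d
    travel_time_yr = path_length / velocity / 365.25

    print("travel time 5th-95th percentile (yr):",
          np.percentile(travel_time_yr, [5, 95]))
    # Rank correlation between log K and travel time as a crude sensitivity measure
    rho, _ = spearmanr(logK, travel_time_yr)
    print("Spearman rank correlation (logK vs travel time):", round(rho, 3))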
Probabilistic Aeroelastic Analysis of Turbomachinery Components
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Mital, S. K.; Stefko, G. L.
2004-01-01
A probabilistic approach is described for aeroelastic analysis of turbomachinery blade rows. Blade rows with subsonic flow and blade rows with supersonic flow with a subsonic leading edge are considered. To demonstrate the probabilistic approach, the flutter frequency, damping, and forced response of a blade row representing a compressor geometry are considered. The analysis accounts for uncertainties in structural and aerodynamic design variables. The results are presented in the form of probability density functions (PDFs) and sensitivity factors. For the subsonic flow cascade, comparisons are also made with different probabilistic distributions, probabilistic methods, and Monte Carlo simulation. The results show that the probabilistic approach provides a more realistic and systematic way to assess the effect of uncertainties in design variables on the aeroelastic instabilities and response.
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2016-04-01
An important source of uncertainty, which then causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. There are many physical parameters in numerical models in the atmospheric and oceanic sciences, and it would cost a great deal to reduce uncertainties in all physical parameters. Therefore, finding a subset of these parameters, which are relatively more sensitive and important parameters, and reducing the errors in the physical parameters in this subset would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of those relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in arid and semi-arid regions of China compared to those in northern, northeastern and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
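The CNOP-P calculation can be viewed as a constrained optimization: find the parameter perturbation within a prescribed constraint ball that maximizes a simulation cost function. The sketch below illustrates that idea with a toy scalar model and a generic SciPy optimizer; it is not the LPJ application, which relies on far more elaborate models and solvers.

    # Minimal CNOP-P-style sketch: find the parameter perturbation p' within a
    # constraint ||p'|| <= delta that maximizes a cost function J(p'). The model
    # and cost function are toy stand-ins; real applications typically rely on
    # adjoint or ensemble-based optimization rather than generic SciPy solvers.
    import numpy as np
    from scipy.optimize import minimize

    p_ref = np.array([1.0, 0.5, 2.0])        # reference physical parameters
    delta = 0.2                              # constraint radius on perturbations

    def simulate(p):
        """Toy nonlinear 'simulation' output (e.g., vegetation carbon)."""
        return np.sin(p[0]) * p[1] ** 2 + 0.3 * p[0] * p[2]

    def cost(p_prime):
        """Departure of the perturbed simulation from the reference simulation."""
        return (simulate(p_ref + p_prime) - simulate(p_ref)) ** 2

    # Maximize the cost = minimize its negative, subject to ||p'||_2 <= delta
    res = minimize(lambda pp: -cost(pp),
                   x0=np.full(3, 0.05),
                   method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda pp: delta - np.linalg.norm(pp)}])

    cnop_p = res.x
    print("CNOP-P perturbation:", np.round(cnop_p, 3))
    print("largest components indicate the most sensitive parameters")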
The art of maturity modeling. Part 2. Alternative models and sensitivity analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waples, D.W.; Suizu, Masahiro; Kamata, Hiromi
1992-01-01
The sensitivity of exploration decisions to variations in several input parameters for maturity modeling was examined for the MITI Rumoi well, Hokkaido, Japan. Decisions were almost completely insensitive to uncertainties about formation age and erosional removal across some unconformities, but were more sensitive to changes in removal during unconformities which occurred near maximum paleotemperatures. Exploration decisions were not very sensitive to the choice of a particular kinetic model for hydrocarbon generation. Uncertainties in kerogen type and the kinetics of different kerogen types are more serious than differences among the various kinetic models. Results of modeling using the TTI method were unsatisfactory. Thermal history and timing and amount of hydrocarbon generation estimated or calculated using the TTI method were greatly different from those obtained using a purely kinetic model. The authors strongly recommend use of the kinetic Ro method instead of the TTI method. If they had lacked measured Ro data, subsurface temperature data, or both, their confidence in the modeling results would have been sharply reduced. Conceptual models for predicting heat flow and thermal conductivity are simply too weak at present to allow one to carry out highly meaningful modeling unless the input is constrained by measured data. Maturity modeling therefore requires the use of more, not fewer, measured temperature and maturity data. The use of sensitivity analysis in maturity modeling is very important for understanding the geologic system, for knowing what level of confidence to place on the results, and for determining what new types of data would be most necessary to improve confidence. Sensitivity analysis can be carried out easily using a rapid, interactive maturity-modeling program.
Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.
2015-01-01
Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
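A Morris screening run of the kind described can be set up with the SALib library; the sketch below uses a hypothetical three-parameter stand-in for the RFA model, with illustrative parameter names, ranges and response.

    # Morris screening sketch using SALib on a hypothetical stand-in for the RFA
    # model; parameter names, ranges and the response function are illustrative.
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze.morris import analyze as morris_analyze

    problem = {
        "num_vars": 3,
        "names": ["perfusion", "elec_conductivity", "death_rate"],
        "bounds": [[0.5e-3, 5e-3], [0.1, 0.6], [1e-4, 1e-2]],
    }

    def ablation_zone_volume(x):
        """Toy response (cm^3): shrinks with perfusion, grows with conductivity."""
        w, sigma, k = x.T
        return 8.0 * sigma / (1.0 + 400.0 * w) + 50.0 * k

    X = morris_sample(problem, N=100, num_levels=4)      # trajectories of runs
    Y = ablation_zone_volume(X)
    Si = morris_analyze(problem, X, Y, num_levels=4)

    for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
        print(f"{name:>18s}: mu* = {mu_star:.3f}, sigma = {sigma:.3f}")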
Validation and uncertainty analysis of a pre-treatment 2D dose prediction model
NASA Astrophysics Data System (ADS)
Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank
2018-02-01
Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
Ethical sensitivity in professional practice: concept analysis.
Weaver, Kathryn; Morse, Janice; Mitcham, Carl
2008-06-01
This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
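The first- and second-order moment matching described above reduces, at first order, to propagating input variances through sensitivity derivatives. The sketch below illustrates the first-order case for a toy algebraic response and checks it against Monte Carlo; it is not the quasi 1-D Euler code.

    # First-order statistical moment propagation using sensitivity derivatives,
    # checked against Monte Carlo, for a toy algebraic stand-in for a CFD output.
    import numpy as np

    def response(x):
        """Toy response f(x1, x2), e.g. a pressure ratio (illustrative only)."""
        return x[0] ** 2 / (1.0 + x[1])

    mu = np.array([1.5, 0.4])          # means of the random inputs
    sigma = np.array([0.05, 0.03])     # standard deviations (independent inputs)

    # First-order sensitivity derivatives by finite differences
    eps = 1e-6
    grad = np.array([(response(mu + eps * np.eye(2)[i]) - response(mu)) / eps
                     for i in range(2)])

    mean_fo = response(mu)                              # first-order mean
    var_fo = np.sum((grad * sigma) ** 2)                # first-order variance

    # Monte Carlo reference
    rng = np.random.default_rng(5)
    X = rng.normal(mu, sigma, size=(100000, 2))
    Y = response(X.T)

    print("first-order  mean/std:", mean_fo, np.sqrt(var_fo))
    print("Monte Carlo mean/std :", Y.mean(), Y.std())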
Monte Carlo capabilities of the SCALE code system
Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...
2014-09-12
SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.
Cotton, Cary C; Erim, Daniel; Eluri, Swathi; Palmer, Sarah H; Green, Daniel J; Wolf, W Asher; Runge, Thomas M; Wheeler, Stephanie; Shaheen, Nicholas J; Dellon, Evan S
2017-06-01
Topical corticosteroids or dietary elimination are recommended as first-line therapies for eosinophilic esophagitis, but data to directly compare these therapies are scant. We performed a cost utility comparison of topical corticosteroids and the 6-food elimination diet (SFED) in treatment of eosinophilic esophagitis, from the payer perspective. We used a modified Markov model based on current clinical guidelines, in which transition between states depended on histologic response simulated at the individual cohort-member level. Simulation parameters were defined by systematic review and meta-analysis to determine the base-case estimates and bounds of uncertainty for sensitivity analysis. Meta-regression models included adjustment for differences in study and cohort characteristics. In the base-case scenario, topical fluticasone was about as effective as SFED but more expensive at a 5-year time horizon ($9261.58 vs $5719.72 per person). SFED was more effective and less expensive than topical fluticasone and topical budesonide in the base-case scenario. Probabilistic sensitivity analysis revealed little uncertainty in relative treatment effectiveness. There was somewhat greater uncertainty in the relative cost of treatments; most simulations found SFED to be less expensive. In a cost utility analysis comparing topical corticosteroids and SFED for first-line treatment of eosinophilic esophagitis, the therapies were similar in effectiveness. SFED was on average less expensive, and more cost effective in most simulations, than topical budesonide and topical fluticasone, from a payer perspective and not accounting for patient-level costs or quality of life. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.
2016-01-01
Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical uncertainty of approximately 25-50 percent (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty of approximately 10-20 percent, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground-truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty, and it is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences, we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.
NASA Astrophysics Data System (ADS)
Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai
2017-10-01
Uncertainties in structure properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate the effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied to RTHS without delay to determine the order of the PCE, the number of sample points, and the method for coefficient calculation. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially, respectively, with respect to actuator delay. Sensitivity analysis through Sobol indices also indicates that the influence of individual random variables decreases while coupling effects increase as actuator delay grows.
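To make the projection step above concrete, the sketch below fits a total-degree-2 polynomial chaos expansion by least squares for a toy SDOF maximum-displacement response with two uncertain properties (stiffness and damping ratio) and a fixed actuator delay. The response function, coefficients of variation, sample size, and delay amplification factor are illustrative assumptions, not the structure or settings used in the paper; the mean, variance, and first-order Sobol indices are read directly from the expansion coefficients.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

rng = np.random.default_rng(0)

# Toy SDOF steady-state response (illustrative, not the paper's model):
# maximum displacement under harmonic forcing, inflated by a crude delay factor.
def max_disp(k, zeta, delay=0.0, m=1.0, F0=1.0, w=5.0):
    c = 2.0 * zeta * np.sqrt(k * m)                   # damping coefficient
    amp = F0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)
    return amp * (1.0 + 5.0 * delay)                  # assumed delay amplification

# Two standard-normal germs mapped to uncertain stiffness and damping ratio
N = 2000
xi = rng.standard_normal((N, 2))
k = 40.0 * (1.0 + 0.10 * xi[:, 0])                    # 10% CoV stiffness (assumed)
zeta = 0.05 * (1.0 + 0.20 * xi[:, 1])                 # 20% CoV damping ratio (assumed)
y = max_disp(k, zeta, delay=0.01)

# Total-degree-2 probabilists' Hermite basis, identified by multi-indices
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
def herme(n, x):
    return He.hermeval(x, [0.0] * n + [1.0])          # He_n(x)
A = np.column_stack([herme(a, xi[:, 0]) * herme(b, xi[:, 1]) for a, b in alphas])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares PCE coefficients

norm2 = np.array([factorial(a) * factorial(b) for a, b in alphas])  # E[He_a^2 He_b^2]
var_terms = coef**2 * norm2
mean, total_var = coef[0], var_terms[1:].sum()
S_k = var_terms[[1, 3]].sum() / total_var             # terms involving only stiffness
S_zeta = var_terms[[2, 5]].sum() / total_var          # terms involving only damping
print(f"mean={mean:.4f}  var={total_var:.3e}  S_k={S_k:.2f}  S_zeta={S_zeta:.2f}")
```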
Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch., E-mail: jean-christophe.sublet@ccfe.ac.uk
2015-01-15
The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-01-01
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster–Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster–Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results. PMID:27019609
Uncertainties in Earthquake Loss Analysis: A Case Study From Southern California
NASA Astrophysics Data System (ADS)
Mahdyiar, M.; Guin, J.
2005-12-01
Probabilistic earthquake hazard and loss analyses play important roles in many areas of risk management, including earthquake related public policy and insurance ratemaking. Rigorous loss estimation for portfolios of properties is difficult since there are various types of uncertainties in all aspects of modeling and analysis. It is the objective of this study to investigate the sensitivity of earthquake loss estimation to uncertainties in regional seismicity, earthquake source parameters, ground motions, and sites' spatial correlation on typical property portfolios in Southern California. Southern California is an attractive region for such a study because it has a large population concentration exposed to significant levels of seismic hazard. During the last decade, there have been several comprehensive studies of most regional faults and seismogenic sources. There have also been detailed studies on regional ground motion attenuations and regional and local site responses to ground motions. This information has been used by engineering seismologists to conduct regional seismic hazard and risk analysis on a routine basis. However, one of the more difficult tasks in such studies is the proper incorporation of uncertainties in the analysis. From the hazard side, there are uncertainties in the magnitudes, rates and mechanisms of the seismic sources and local site conditions and ground motion site amplifications. From the vulnerability side, there are considerable uncertainties in estimating the state of damage of buildings under different earthquake ground motions. From an analytical side, there are challenges in capturing the spatial correlation of ground motions and building damage, and integrating thousands of loss distribution curves with different degrees of correlation. In this paper we propose to address some of these issues by conducting loss analyses of a typical small portfolio in southern California, taking into consideration various source and ground motion uncertainties. The approach is designed to integrate loss distribution functions with different degrees of correlation for portfolio analysis. The analysis is based on USGS 2002 regional seismicity model.
Direct Aerosol Forcing Uncertainty
Mccomiskey, Allison
2008-01-15
Understanding sources of uncertainty in aerosol direct radiative forcing (DRF), the difference in a given radiative flux component with and without aerosol, is essential to quantifying changes in Earth's radiation budget. We examine the uncertainty in DRF due to measurement uncertainty in the quantities on which it depends: aerosol optical depth, single scattering albedo, asymmetry parameter, solar geometry, and surface albedo. Direct radiative forcing at the top of the atmosphere and at the surface as well as sensitivities, the changes in DRF in response to unit changes in individual aerosol or surface properties, are calculated at three locations representing distinct aerosol types and radiative environments. The uncertainty in DRF associated with a given property is computed as the product of the sensitivity and typical measurement uncertainty in the respective aerosol or surface property. Sensitivity and uncertainty values permit estimation of total uncertainty in calculated DRF and identification of properties that most limit accuracy in estimating forcing. Total uncertainties in modeled local diurnally averaged forcing range from 0.2 to 1.3 W m-2 (42 to 20%) depending on location (from tropical to polar sites), solar zenith angle, surface reflectance, aerosol type, and aerosol optical depth. The largest contributor to total uncertainty in DRF is usually single scattering albedo; however, decreasing measurement uncertainties for any property would increase accuracy in DRF. Comparison of two radiative transfer models suggests the contribution of modeling error is small compared to the total uncertainty, although it is comparable to the uncertainty arising from some individual properties.
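A minimal sketch of the bookkeeping described above, assuming a toy stand-in for the radiative transfer calculation: each property's uncertainty contribution is the product of a numerically estimated sensitivity and an assumed 1-sigma measurement uncertainty, and the contributions are combined in root-sum-square fashion. The functional form, base values, and uncertainties below are all illustrative, not those of the study.

```python
import numpy as np

# Illustrative stand-in for a radiative transfer calculation of aerosol direct
# radiative forcing (DRF); this toy functional form is assumed only to
# demonstrate the sensitivity x measurement-uncertainty bookkeeping.
def drf(aod, ssa, g, surf_alb):
    return -25.0 * aod * ssa * (1.0 - g) * (1.0 - surf_alb) ** 2 \
           + 30.0 * aod * (1.0 - ssa) * surf_alb

base = {"aod": 0.15, "ssa": 0.92, "g": 0.65, "surf_alb": 0.15}        # assumed values
meas_unc = {"aod": 0.01, "ssa": 0.03, "g": 0.02, "surf_alb": 0.01}    # assumed 1-sigma

f0 = drf(**base)
contrib = {}
for name, sigma in meas_unc.items():
    pert = dict(base)
    pert[name] += 1e-4                       # small perturbation for a numerical derivative
    sens = (drf(**pert) - f0) / 1e-4         # sensitivity dDRF/dx
    contrib[name] = sens * sigma             # uncertainty contribution = sensitivity * unc.

total = np.sqrt(sum(v**2 for v in contrib.values()))   # root-sum-square combination
print(f"DRF = {f0:.2f} W m-2, total uncertainty = {total:.2f} W m-2")
for name, v in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:9s} contributes {abs(v):.2f} W m-2")
```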
NASA Astrophysics Data System (ADS)
Blanco, E. L.; Lund, M.; Williams, M. D.; Christensen, T. R.; Tamstorf, M. P.
2015-12-01
An improvement in our process-based understanding of CO2 exchanges in the Arctic, and their climate sensitivity, is critical for examining the role of tundra ecosystems in changing climates. Arctic organic carbon storage has seen increased attention in recent years due to the large potential for carbon releases following thaw. Our knowledge about the exact scale and sensitivity for a phase-change of these C stocks is, however, limited. Minor variations in Gross Primary Production (GPP) and Ecosystem Respiration (Reco) driven by changes in the climate can lead to either C sink or C source states, which likely will impact the overall C cycle of the ecosystem. Eddy covariance data are usually used to partition Net Ecosystem Exchange (NEE) into GPP and Reco by means of flux separation algorithms. However, different partitioning approaches lead to different estimates, as well as undefined uncertainties. The main objectives of this study are to use model-data fusion approaches to (1) determine the inter-annual variability in C source/sink strength for an Arctic fen, and attribute such variations to GPP vs Reco, (2) investigate the climate sensitivity of these processes and (3) explore the uncertainties in NEE partitioning. The intention is to elaborate on the information gathered in an existing catchment area under an extensive cross-disciplinary ecological monitoring program in low Arctic West Greenland, established under the auspices of the Greenland Ecosystem Monitoring (GEM) program. The use of such a thorough long-term (7 years) dataset applied to the exploration of inter-annual variability of carbon exchange, related driving factors and NEE partitioning uncertainties provides a novel input into our understanding of land-atmosphere CO2 exchange.
Stenemo, Fredrik; Jarvis, Nicholas
2007-09-01
A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the 80th percentile concentration for the scenario and the concentration simulated with no errors, c(sim). The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: - minimising the population exposure during the accident; - complementing field measurements for the assessment of short and long term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: - high dimensional inputs; - correlated inputs or inputs with complex structures; - high dimensional output; - multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the input variables' probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on the time matching of emission peaks was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
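A minimal sketch of the Morris screening step described above, using a toy four-input surrogate in place of the full dispersion model; the input names, ranges, and trajectory settings are illustrative assumptions. Each trajectory perturbs one input at a time on a coarse grid, and the resulting elementary effects are summarized by mu* (mean absolute effect) and sigma (spread, indicating interactions or nonlinearity).

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy surrogate for a dispersion output; inputs scaled to [0, 1]
def model(x):
    emis, wind, kdiff, cloud = x       # emission rate, wind speed, diffusion, cloud thickness
    return emis * np.exp(-2.0 * wind) + 0.1 * kdiff + 0.02 * cloud * kdiff

k, r, levels = 4, 20, 4                 # inputs, trajectories, grid levels (assumed)
delta = levels / (2.0 * (levels - 1))
ee = [[] for _ in range(k)]             # elementary effects per input

for _ in range(r):
    x = rng.integers(0, levels // 2, size=k) / (levels - 1)   # random grid starting point
    y = model(x)
    for i in rng.permutation(k):                              # one-at-a-time trajectory
        x_new = x.copy()
        x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
        y_new = model(x_new)
        ee[i].append((y_new - y) / (x_new[i] - x[i]))
        x, y = x_new, y_new

for i, name in enumerate(["emission", "wind", "diffusion", "cloud_thickness"]):
    e = np.asarray(ee[i])
    print(f"{name:16s} mu*={np.mean(np.abs(e)):.3f}  sigma={np.std(e):.3f}")
```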
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yanlong
2017-09-01
Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with periodically arched shape are studied by Monte Carlo Simulation (MCS) and interval analysis based on the Taylor series. Given the random variations of the overall uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of uncertainty in the geometrical parameter on the band gap of the undulated beam is stronger than that of the material parameter. This conclusion is also confirmed by the interval analysis based on the Taylor series. Our methodology offers a strategy to reduce the errors between the designed and practical values of the band gaps by improving the accuracy of the specially selected uncertain design variables of the periodical structures.
Development of probabilistic emission inventories of air toxics for Jacksonville, Florida, USA.
Zhao, Yuchao; Frey, H Christopher
2004-11-01
Probabilistic emission inventories were developed for 1,3-butadiene, mercury (Hg), arsenic (As), benzene, formaldehyde, and lead for Jacksonville, FL. To quantify inter-unit variability in empirical emission factor data, the Maximum Likelihood Estimation (MLE) method or the Method of Matching Moments was used to fit parametric distributions. For data sets that contain nondetected measurements, a method based upon MLE was used for parameter estimation. To quantify the uncertainty in urban air toxic emission factors, parametric bootstrap simulation and empirical bootstrap simulation were applied to uncensored and censored data, respectively. The probabilistic emission inventories were developed based on the product of the uncertainties in the emission factors and in the activity factors. The uncertainties in the urban air toxics emission inventories range from as small as -25 to +30% for Hg to as large as -83 to +243% for As. The key sources of uncertainty in the emission inventory for each toxic are identified based upon sensitivity analysis. Typically, uncertainty in the inventory of a given pollutant can be attributed primarily to a small number of source categories. Priorities for improving the inventories and for refining the probabilistic analysis are discussed.
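The parametric bootstrap step described above can be sketched as follows for an uncensored, approximately lognormal emission-factor data set; the data here are synthetic, and the sample size, distribution parameters, and number of bootstrap replicates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for an uncensored emission-factor data set (lognormal),
# used only to illustrate the parametric bootstrap described above.
data = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=15)

# MLE fit of a lognormal distribution to the observed emission factors
mu_hat, sig_hat = np.log(data).mean(), np.log(data).std(ddof=0)

B = 5000
boot_means = np.empty(B)
for b in range(B):
    resample = rng.lognormal(mu_hat, sig_hat, size=data.size)   # parametric bootstrap sample
    m, s = np.log(resample).mean(), np.log(resample).std(ddof=0)
    boot_means[b] = np.exp(m + 0.5 * s**2)                      # lognormal mean estimate

point = np.exp(mu_hat + 0.5 * sig_hat**2)
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean EF = {point:.3f}, 95% uncertainty range "
      f"{100 * (lo / point - 1):+.0f}% to {100 * (hi / point - 1):+.0f}%")
```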
Understanding earth system models: how Global Sensitivity Analysis can help
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Wagener, Thorsten
2017-04-01
Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit a complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts on uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.
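As a compact illustration of variance-based GSA, the sketch below estimates first-order and total-order Sobol indices with the common pick-freeze (Saltelli/Jansen) estimators on a toy interacting-input response; the response function, input names, and sample size are illustrative assumptions rather than any particular earth system model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy earth-system-style response with interacting inputs (illustrative only)
def model(x):
    p, t, r = x[:, 0], x[:, 1], x[:, 2]          # precipitation, temperature, radiation
    return p * np.exp(0.3 * t) + 0.5 * r + 0.8 * p * r

k, N = 3, 20000
A = rng.random((N, k))                            # base sample matrix
B = rng.random((N, k))                            # independent resample matrix
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i, name in enumerate(["precip", "temp", "radiation"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                           # "freeze" all inputs except input i
    yABi = model(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y        # Saltelli (2010) first-order estimator
    ST = 0.5 * np.mean((yA - yABi) ** 2) / var_y  # Jansen total-order estimator
    print(f"{name:10s} S1={S1:.2f}  ST={ST:.2f}")
```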
NASA Astrophysics Data System (ADS)
Jie, M.; Zhang, J.; Guo, B. B.
2017-12-01
As a typical distributed hydrological model, SWAT also presents challenges in calibrating parameters and analyzing their uncertainty. This paper takes the Chaohe River Basin, China, as the study area. After the SWAT model was set up and the DEM data of the Chaohe river basin were loaded, the watershed was automatically divided into several sub-basins. Land use, soil and slope were then analyzed on the basis of these sub-basins, the hydrological response units (HRUs) of the study area were delineated, and running the SWAT model yielded simulated runoff for the watershed. On this basis, weather data and the known daily runoff at three hydrological stations, combined with the SWAT-CUP automatic calibration program and manual adjustment, were used for multi-site calibration of the model parameters. Furthermore, the GLUE algorithm was used to analyze the parameter uncertainty of the SWAT model. The sensitivity analysis, calibration and uncertainty study indicate that the parameterization of the hydrological characteristics of the Chaohe river is successful and feasible, and the model can be used to simulate the Chaohe river basin.
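A minimal sketch of the GLUE step described above, using a toy two-parameter runoff surrogate in place of SWAT; the surrogate, parameter ranges, Nash-Sutcliffe behavioural threshold, and synthetic observations are all illustrative assumptions. Parameter sets exceeding the threshold are retained as behavioural, and their spread gives the parameter uncertainty bounds (likelihood weighting of the predictions is omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy rainfall-runoff surrogate standing in for SWAT (illustrative only):
# a storage coefficient and a recession constant control simulated discharge.
def simulate(storage, recession, rain):
    q, s = np.zeros_like(rain), 0.0
    for t, p in enumerate(rain):
        s += p
        q[t] = recession * s * (s / (s + storage))
        s -= q[t]
    return q

rain = rng.gamma(2.0, 3.0, size=100)
q_obs = simulate(60.0, 0.25, rain) * (1.0 + 0.1 * rng.standard_normal(100))  # synthetic "obs"

# GLUE: Monte Carlo sampling with a behavioural threshold on Nash-Sutcliffe efficiency
n = 5000
storage = rng.uniform(20.0, 150.0, n)
recession = rng.uniform(0.05, 0.6, n)
nse = np.empty(n)
for i in range(n):
    q = simulate(storage[i], recession[i], rain)
    nse[i] = 1.0 - np.sum((q - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

behavioural = nse > 0.5                              # assumed acceptability threshold
print(f"{behavioural.sum()} behavioural parameter sets out of {n}")
print("storage   5-95% range:", np.percentile(storage[behavioural], [5, 95]))
print("recession 5-95% range:", np.percentile(recession[behavioural], [5, 95]))
```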
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
Climate data induced uncertainty in model-based estimations of terrestrial primary productivity
NASA Astrophysics Data System (ADS)
Wu, Zhendong; Ahlström, Anders; Smith, Benjamin; Ardö, Jonas; Eklundh, Lars; Fensholt, Rasmus; Lehsten, Veiko
2017-06-01
Model-based estimations of historical fluxes and pools of the terrestrial biosphere differ substantially. These differences arise not only from differences between models but also from differences in the environmental and climatic data used as input to the models. Here we investigate the role of uncertainties in historical climate data by performing simulations of terrestrial gross primary productivity (GPP) using a process-based dynamic vegetation model (LPJ-GUESS) forced by six different climate datasets. We find that the climate induced uncertainty, defined as the range among historical simulations in GPP when forcing the model with the different climate datasets, can be as high as 11 Pg C yr-1 globally (9% of mean GPP). We also assessed a hypothetical maximum climate data induced uncertainty by combining climate variables from different datasets, which resulted in significantly larger uncertainties of 41 Pg C yr-1 globally, or 32% of mean GPP. The uncertainty is partitioned into components associated with the three main climatic drivers, temperature, precipitation, and shortwave radiation. Additionally, we illustrate how the uncertainty due to a given climate driver depends both on the magnitude of the forcing data uncertainty (climate data range) and the apparent sensitivity of the modeled GPP to the driver (apparent model sensitivity). We find that LPJ-GUESS overestimates GPP compared to an empirically based GPP data product in all land cover classes except tropical forests. Tropical forests emerge as a disproportionate source of uncertainty in GPP estimation both in the simulations and in the empirical data products. The tropical forest uncertainty is most strongly associated with shortwave radiation and precipitation forcing, for which the climate data range contributes more to the overall uncertainty than the apparent model sensitivity to forcing. Globally, precipitation dominates the climate induced uncertainty over nearly half of the vegetated land area, which is mainly due to the climate data range and less so due to the apparent model sensitivity. Overall, climate data ranges are found to contribute more to the climate induced uncertainty than apparent model sensitivity to forcing. Our study highlights the need to better constrain tropical climate, and demonstrates that uncertainty caused by climatic forcing data must be considered when comparing and evaluating carbon cycle model results and empirical datasets.
Climate Risk Assessment: Technical Guidance Manual for DoD Installations and Built Environment
2016-09-06
climate change risks to DoD installations and the built environment. The approach, which we call "decision-scaling," reveals the core sensitivity of...DoD installations to climate change. It is designed to illuminate the sensitivity of installations and their supporting infrastructure systems...including water and energy, to climate changes and other uncertainties without dependence on climate change projections. In this way the analysis and
SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
2014-06-01
Purpose: To evaluate uncertainties of the cell surviving fraction reconstructed from tumor-volume variation curves during radiation therapy, using sensitivity analysis based on linear perturbation theory. Methods: The time dependent tumor-volume functions V(t) have been calculated using a two-level cell population model which is based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable and lethally damaged cells. The sensitivity function is defined as S(t)=[δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions have been computed for different values of the cell surviving fraction from the practically important interval S2 = 0.1-0.7 using the two-level cell population model. Results: The sensitivity function of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S2 = 0.7 and approached zero as S2 approached zero. Assuming a systematic error of 3-4%, we obtain that the relative error in S2 is less than 20% in the range S2 = 0.4-0.7. This result is important because large values of S2, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S2 < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used for evaluation of the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
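The direct perturbation calculation of the sensitivity function S(t) can be sketched as below with a deliberately simplified two-population tumor-volume model; the model form, clearance rate, fractionation schedule, and perturbation size are illustrative assumptions, not the two-level model of the abstract.

```python
import numpy as np

# Toy two-population tumor-volume model (illustrative assumption): viable cells
# shrink by the per-fraction surviving fraction S2; lethally damaged cells clear
# exponentially after damage.
def volume(t_days, s2, clearance=0.1, v0=1.0):
    fractions = np.floor(t_days)                      # one fraction per day (assumed)
    viable = v0 * s2 ** fractions
    damaged = v0 * (1.0 - s2 ** fractions) * np.exp(-clearance * t_days)
    return viable + damaged

t = np.linspace(0.0, 40.0, 81)
s2, rel_pert = 0.5, 0.03                              # 3% perturbation of S2 (assumed)

v_ref = volume(t, s2)
v_pert = volume(t, s2 * (1.0 + rel_pert))
S_t = ((v_pert - v_ref) / v_ref) / rel_pert           # sensitivity S(t) = (dV/V)/(dx/x)
print(f"max |S(t)| over the treatment course = {np.abs(S_t).max():.2f}")
```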
Uncertainty Categorization, Modeling, and Management for Regional Water Supply Planning
NASA Astrophysics Data System (ADS)
Fletcher, S.; Strzepek, K. M.; AlSaati, A.; Alhassan, A.
2016-12-01
Many water planners face increased pressure on water supply systems from growing demands, variability in supply and a changing climate. Short-term variation in water availability and demand; long-term uncertainty in climate, groundwater storage, and sectoral competition for water; and varying stakeholder perspectives on the impacts of water shortages make it difficult to assess the necessity of expensive infrastructure investments. We categorize these uncertainties on two dimensions: whether they are the result of stochastic variation or epistemic uncertainty, and whether the uncertainties can be described probabilistically or are deep uncertainties whose likelihood is unknown. We develop a decision framework that combines simulation for probabilistic uncertainty, sensitivity analysis for deep uncertainty and Bayesian decision analysis for uncertainties that are reduced over time with additional information. We apply this framework to two contrasting case studies - drought preparedness in Melbourne, Australia and fossil groundwater depletion in Riyadh, Saudi Arabia - to assess the impacts of different types of uncertainty on infrastructure decisions. Melbourne's water supply system relies on surface water, which is impacted by natural variation in rainfall, and a market-based system for managing water rights. Our results show that small, flexible investment increases can mitigate shortage risk considerably at reduced cost. Riyadh, by contrast, relies primarily on desalination for municipal use and fossil groundwater for agriculture, and a centralized planner makes allocation decisions. Poor regional groundwater measurement makes it difficult to know when groundwater pumping will become uneconomical, resulting in epistemic uncertainty. However, collecting more data can reduce the uncertainty, suggesting the need for different uncertainty modeling and management strategies in Riyadh than in Melbourne. We will categorize the two systems and propose appropriate decision making under uncertainty methods from the state of the art. We will compare the efficiency of alternative approaches to the two case studies. Finally, we will present a hybrid decision analytic tool to address the synthesis of uncertainties.
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
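A minimal sketch of one common OWA formulation with Monte Carlo-sampled criterion weights, applied to a synthetic raster of standardized criterion scores; the number of criteria, the Dirichlet sampling of weights, the order weights encoding the risk attitude, and the summary statistics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic raster of standardized criterion scores (cells x criteria), illustrative only
n_cells, n_criteria = 1000, 6
criteria = rng.random((n_cells, n_criteria))

# Assumed order weights (slightly risk-averse: emphasis on the largest weighted scores)
order_weights = np.array([0.3, 0.25, 0.2, 0.15, 0.07, 0.03])

n_runs = 500
scores = np.empty((n_runs, n_cells))
for r in range(n_runs):
    w = rng.dirichlet(np.ones(n_criteria))             # Monte Carlo sample of criterion weights
    weighted = criteria * w                            # apply criterion weights per cell
    ordered = np.sort(weighted, axis=1)[:, ::-1]       # reorder descending within each cell
    scores[r] = (ordered * order_weights).sum(axis=1)  # apply order weights (OWA aggregation)

mean_score = scores.mean(axis=0)                       # ensemble susceptibility score
spread = scores.std(axis=0)                            # weight-induced uncertainty per cell
print(f"90th percentile of mean susceptibility score: {np.percentile(mean_score, 90):.3f}")
print(f"average weight-induced uncertainty (sd) per cell: {spread.mean():.3f}")
```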
Effects of Buffer Size and Shape on Associations between the Built Environment and Energy Balance
Berrigan, David; Hart, Jaime E.; Hipp, J. Aaron; Hoehner, Christine M.; Kerr, Jacqueline; Major, Jacqueline M.; Oka, Masayoshi; Laden, Francine
2014-01-01
Uncertainty in the relevant spatial context may drive heterogeneity in findings on the built environment and energy balance. To estimate the effect of this uncertainty, we conducted a sensitivity analysis defining intersection and business densities and counts within different buffer sizes and shapes on associations with self-reported walking and body mass index. Linear regression results indicated that the scale and shape of buffers influenced study results and may partly explain the inconsistent findings in the built environment and energy balance literature. PMID:24607875
Uncertainty quantification for PZT bimorph actuators
NASA Astrophysics Data System (ADS)
Bravo, Nikolas; Smith, Ralph C.; Crews, John
2018-03-01
In this paper, we discuss the development of a high fidelity model for a PZT bimorph actuator used for micro-air vehicles, which include the Robobee. We developed a high-fidelity model for the actuator using the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then formulated an inverse problem on the model and performed local and global sensitivity analysis of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.
The short-term impact of economic uncertainty on motor vehicle collisions.
Vandoros, Sotiris; Avendano, Mauricio; Kawachi, Ichiro
2018-06-01
Stress and anxiety lead to attention loss and sleep deprivation and may reduce driving performance, increasing the risk of motor vehicle collision. We used evidence from a natural experiment to examine whether daily changes in economic uncertainty, potentially leading to attention or sleep loss, are associated with collisions in Great Britain. Daily data from the economic policy uncertainty index, derived from analysis of daily UK newspapers, were linked to the daily number of motor vehicle collisions in Great Britain over the period 2005-2015, obtained from the Department for Transport. Exploiting daily variations in economic uncertainty, we used a GARCH approach to model daily rates of motor vehicle collisions as a function of economic uncertainty, controlling for month and day of the week, monthly unemployment rates and weekly unleaded petrol prices. A spike in the daily economic uncertainty index was associated with an immediate increase in the number of motor vehicle collisions. Results were robust to various sensitivity analyses. Overall, daily increases in economic uncertainty are associated with short-term spikes in motor vehicle collisions. Preventive and traffic control measures may need to increase during periods of economic uncertainty. Copyright © 2018 Elsevier Inc. All rights reserved.
The future viability of algae-derived biodiesel under economic and technical uncertainties.
Brownbridge, George; Azadi, Pooya; Smallbone, Andrew; Bhave, Amit; Taylor, Benjamin; Kraft, Markus
2014-01-01
This study presents a techno-economic assessment of algae-derived biodiesel under economic and technical uncertainties associated with the development of algal biorefineries. A global sensitivity analysis was performed using a High Dimensional Model Representation (HDMR) method. It was found that, considering reasonable ranges over which each parameter can vary, the sensitivity of the biodiesel production cost to the key input parameters decreases in the following order: algae oil content > algae annual productivity per unit area > plant production capacity > carbon price increase rate. It was also found that the Return on Investment (ROI) is highly sensitive to the algae oil content, and to a lesser extent to the algae annual productivity, crude oil price and price increase rate, plant production capacity, and carbon price increase rate. For a large scale plant (100,000 tonnes of biodiesel per year) the production cost of biodiesel is likely to be £0.8-1.6 per kg. Copyright © 2013 Elsevier Ltd. All rights reserved.
Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality
Hondula, David M.; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer
2017-01-01
Background: Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to “adaptation uncertainty” (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. Objectives: This study had three aims: a) Compare the range in projected impacts that arises from using different adaptation modeling methods; b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; c) recommend modeling method(s) to use in future impact assessments. Methods: We estimated impacts for 2070–2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. Results: The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Conclusions: Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634 PMID:28885979
Statistical sensitivity analysis of a simple nuclear waste repository model
NASA Astrophysics Data System (ADS)
Ronen, Y.; Lucius, J. L.; Blow, E. M.
1980-06-01
This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near field temperature distribution in a single level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and their radius. Other important parameters were those related to salt properties at a point of interest in the repository.
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald; El-Azab, Anter; Pernice, Michael
2017-03-23
In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratories on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.
Ciffroy, P; Alfonso, B; Altenpohl, A; Banjac, Z; Bierkens, J; Brochot, C; Critto, A; De Wilde, T; Fait, G; Fierens, T; Garratt, J; Giubilato, E; Grange, E; Johansson, E; Radomyski, A; Reschwann, K; Suciu, N; Tanaka, T; Tediosi, A; Van Holderbeke, M; Verdonck, F
2016-10-15
MERLIN-Expo is a library of models that was developed in the frame of the FP7 EU project 4FUN in order to provide an integrated assessment tool for state-of-the-art exposure assessment for environment, biota and humans, allowing the detection of scientific uncertainties at each step of the exposure process. This paper describes the main features of the MERLIN-Expo tool. The main challenges in exposure modelling that MERLIN-Expo has tackled are: (i) the integration of multimedia (MM) models simulating the fate of chemicals in environmental media, and of physiologically based pharmacokinetic (PBPK) models simulating the fate of chemicals in the human body. MERLIN-Expo thus allows the determination of internal effective chemical concentrations; (ii) the incorporation of a set of functionalities for uncertainty/sensitivity analysis, from screening to variance-based approaches. The availability of such tools for uncertainty and sensitivity analysis is intended to facilitate the incorporation of such issues in future decision making; (iii) the integration of human and wildlife biota targets with common fate modelling in the environment. MERLIN-Expo is composed of a library of fate models dedicated to non-biological receptor media (surface waters, soils, outdoor air), biological media of concern for humans (several cultivated crops, mammals, milk, fish), as well as wildlife biota (primary producers in rivers, invertebrates, fish) and humans. These models can be linked together to create flexible scenarios relevant for both human and wildlife biota exposure. Standardized documentation for each model and training material were prepared to support an accurate use of the tool by end-users. One of the objectives of the 4FUN project was also to increase the confidence in the applicability of the MERLIN-Expo tool through targeted realistic case studies. In particular, we aimed at demonstrating the feasibility of building complex realistic exposure scenarios and the accuracy of the modelling predictions through a comparison with actual measurements. Copyright © 2016 Elsevier B.V. All rights reserved.
The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling
NASA Technical Reports Server (NTRS)
Dahl, Milo D.; Khavaran, Abbas
2010-01-01
Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
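The replacement of fixed model parameters by probability distributions can be sketched as below with a toy spectral model standing in for the broadband shock-associated noise prediction; the spectral shape, the assumed distributions of peak level and peak frequency, and the sample size are illustrative assumptions. The spread of the resulting spectra gives an estimate of the model output uncertainty.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy spectral-level model (illustrative): a log-frequency parabola whose peak
# level and peak frequency are treated as uncertain model parameters.
def spl(freq, peak_level, peak_freq, width=0.6):
    return peak_level - 10.0 * (np.log10(freq / peak_freq) / width) ** 2

freq = np.logspace(2.5, 4.5, 50)                     # 316 Hz to 31.6 kHz

# Fixed parameters replaced by assumed probability distributions
n = 3000
peak_level = rng.normal(120.0, 2.0, n)               # dB, assumed 2 dB standard deviation
peak_freq = rng.lognormal(np.log(3000.0), 0.15, n)   # Hz, assumed 15% log-sd

spectra = np.array([spl(freq, a, f) for a, f in zip(peak_level, peak_freq)])
median = np.median(spectra, axis=0)
lo, hi = np.percentile(spectra, [5, 95], axis=0)     # uncertainty band around the prediction
print("5-95% band width at the spectral peak (dB):", round((hi - lo)[np.argmax(median)], 2))
```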
Liu, Yunbo; Wear, Keith A.; Harris, Gerald R.
2017-01-01
Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement uncertainty and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small PVDF capsule hydrophone and two different fiber-optic hydrophones. The focal waveform and beam distribution of a single element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain the pressure waveform. Compressional pressure, rarefactional pressure, and focal beam distribution were compared up to 10.6/−6.0 MPa (p+ and p−) (1.05 MHz) and 20.65/−7.20 MPa (3.3 MHz). In particular, the effects of spatial averaging, local nonlinear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed an uncertainty of no better than 10-15% in hydrophone-based HITU pressure characterization. PMID:28735734
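A minimal sketch of complex deconvolution of a hydrophone voltage signal by a frequency-dependent complex sensitivity, carried out in the frequency domain with a small regularization term; the synthetic burst, the sensitivity magnitude and phase model, and the regularization constant are illustrative assumptions, not the calibration data of the study.

```python
import numpy as np

# All signals and the sensitivity model below are synthetic, illustrative assumptions.
fs = 250e6                                    # sample rate, Hz
t = np.arange(4096) / fs
f0 = 1.05e6
voltage = np.exp(-((t - 8e-6) / 3e-6) ** 2) * np.sin(2 * np.pi * f0 * t)   # toy tone burst, V

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
# Assumed complex sensitivity: magnitude rolls off with frequency, linear phase lag
mag = 50e-9 / (1.0 + (freqs / 20e6) ** 2)                  # V/Pa
phase = -2 * np.pi * freqs * 10e-9                         # 10 ns group delay
sensitivity = mag * np.exp(1j * phase)

V = np.fft.rfft(voltage)
eps = 1e-3 * np.abs(sensitivity).max()                     # regularize near-zero response
P = V * np.conj(sensitivity) / (np.abs(sensitivity) ** 2 + eps ** 2)   # deconvolution
pressure = np.fft.irfft(P, n=t.size)

print(f"peak compressional pressure p+ = {pressure.max() / 1e6:.2f} MPa")
print(f"peak rarefactional pressure p- = {-pressure.min() / 1e6:.2f} MPa")
```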
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. Our objective was to investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
The Sensitivity of Regional Precipitation to Global Temperature Change and Forcings
NASA Astrophysics Data System (ADS)
Tebaldi, C.; O'Neill, B. C.; Lamarque, J. F.
2016-12-01
Global policies are most commonly formulated in terms of climate targets, like the widely discussed 1.5°C and 2°C warming thresholds identified as critical by the recent Paris agreements. But what does a target defined in terms of a globally averaged quantity mean in terms of expected regional changes? And, in particular, what should we expect in terms of significant changes in precipitation over specific regional domains for these and other incrementally different global goals? In this talk I will summarize the results of an analysis aimed at characterizing the sensitivity of regional temperatures and precipitation amounts to changes in global average temperature. The analysis uses results from a multi-model ensemble (CMIP5), which allows us to address structural uncertainty in future projections, a type of uncertainty particularly relevant when considering precipitation changes. I will show what type of changes in global temperature and forcing levels bring about significant and pervasive changes in regional precipitation, contrasting its sensitivity to that of regional temperature changes. Because of the large internal variability of regional precipitation, I will show that significant changes in average regional precipitation can be detected only for fairly large separations (on the order of 2.5°C or 3°C) in global average temperature levels, in contrast to the much higher sensitivity shown by regional temperatures.
Schijven, J F; Mülschlegel, J H C; Hassanizadeh, S M; Teunis, P F M; de Roda Husman, A M
2006-09-01
Protection zones of shallow unconfined aquifers in The Netherlands were calculated that allow protection against virus contamination to the level that the infection risk of 10^-4 per person per year is not exceeded with a 95% certainty. An uncertainty and a sensitivity analysis of the calculated protection zones were included. It was concluded that protection zones of 1 to 2 years travel time (206-418 m) are needed (6 to 12 times the currently applied travel time of 60 days). This will lead to enlargement of protection zones, encompassing 110 unconfined groundwater well systems that produce 3 × 10^8 m^3 y^-1 of drinking water (38% of total Dutch production from groundwater). A smaller protection zone is possible if it can be shown that an aquifer has properties that lead to greater reduction of virus contamination, like more attachment. Deeper aquifers beneath aquitards of at least 2 years of vertical travel time are adequately protected because vertical flow in the aquitards is only 0.7 m per year. The most sensitive parameters are virus attachment and inactivation. The next most sensitive parameters are grain size of the sand, abstraction rate of groundwater, virus concentrations in raw sewage and consumption of unboiled drinking water. Research is recommended on additional protection by attachment and under unsaturated conditions.
Effects of nonspatial selective and divided visual attention on fMRI BOLD responses.
Weerda, Riklef; Vallines, Ignacio; Thomas, James P; Rutschmann, Roland M; Greenlee, Mark W
2006-09-01
Using an uncertainty paradigm and functional magnetic resonance imaging (fMRI) we studied the effect of nonspatial selective and divided visual attention on the activity of specific areas of human extrastriate visual cortex. The stimuli were single ovals that differed from an implicit standard oval in either colour or width. The subjects' task was to classify the current stimulus as one of two possible alternatives per stimulus dimension. Three different experimental conditions were conducted: "colour-certainty", "shape-certainty" and "uncertainty". In all experimental conditions, the stimulus differed in only one stimulus dimension per trial. In the two certainty conditions, the subjects knew in advance which dimension this would be. During the uncertainty condition they had no such previous knowledge and had to monitor both dimensions simultaneously. Statistical analysis of the fMRI data (with SPM2) revealed a modest effect of the attended stimulus dimension on the neural activity in colour sensitive area V4 (more activity during attention to colour) and in shape sensitive area LOC (more activity during attention to shape). Furthermore, cortical areas known to be related to attention and working memory processes (e.g., lateral prefrontal and posterior parietal cortex) exhibit higher activity during the condition of divided attention ("uncertainty") than during that of selective attention ("certainty").
NASA Astrophysics Data System (ADS)
Oswald, S. E.; Scheiffele, L. M.; Baroni, G.; Ingwersen, J.; Schrön, M.
2017-12-01
One application of Cosmic-Ray Neutron Sensing (CRNS) is to investigate soil moisture on agricultural fields during the crop season. This fully employs the non-invasive character of CRNS without interference with agricultural practices of the farmland. The changing influence of vegetation on CRNS has to be dealt with as well as spatio-temporal influences, e.g. by irrigation or harvest. Previous work revealed that the CRNS signal on farmland shows complex and non-unique response because of the hydrogen pools in different depths and distances. This creates a challenge for soil moisture estimation and subsequent use for irrigation management or hydrological modelling. Thus, a special aim of our study was to assess the uncertainty of CRNS in cropped fields and to identify underlying causes of uncertainty. We have applied CRNS at two field sites during the growing season that were accompanied by intensive measurements of soil moisture, vegetation parameters, and irrigation events. Sources of uncertainty were identified from the experimental data. A Monte Carlo approach was used to propagate these uncertainties to CRNS soil moisture estimations. In addition, a sensitivity analysis was performed to identify the most important factors explaining this uncertainty. Results showed that CRNS soil moisture compares well to the soil moisture network when the point values were converted to weighted water content with all hydrogen pools included. However, when considered as a stand-alone method to retrieve volumetric soil moisture, the performance decreased. The support volume including its penetration depth showed also a considerable uncertainty, especially in relatively dry soil moisture conditions. Of seven factors analyzed, actual soil moisture profile, bulk density, incoming neutron correction and calibrated parameter N0 were found to play an important role. One possible improvement could be a simple correction factor based on independent data of soil moisture profiles to better account for the sensitivity of the CRNS signal to the upper soil layers. This is an important step to improve the method for validation of remote sensing products or agricultural water management and establish CRNS as an applied monitoring tool on farmland.
NASA Astrophysics Data System (ADS)
Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie
2018-05-01
One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
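A minimal sketch of the sampling strategy described here (log-normally distributed rate coefficients propagated to an output statistic) might look as follows; the reactions, nominal rates, uncertainty factors and the toy conversion expression are invented for illustration and are not the plasma chemistry set of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 2000  # number of rate-coefficient combinations, as in the study

# Hypothetical nominal rate coefficients and uncertainty factors (illustrative only):
# an uncertainty factor f means the 1-sigma band of the log-normal spans k0/f .. k0*f.
nominal = {"e + CO2 -> e + CO + O": 1.0e-15,
           "CO + O + M -> CO2 + M": 5.0e-17}
ufactor = {"e + CO2 -> e + CO + O": 2.0,
           "CO + O + M -> CO2 + M": 3.0}

def sample_rates():
    """Draw one random set of rate coefficients from log-normal distributions."""
    return {r: nominal[r] * rng.lognormal(mean=0.0, sigma=np.log(ufactor[r]))
            for r in nominal}

def toy_conversion(k):
    """Hypothetical surrogate for CO2 conversion (forward dissociation vs. back reaction).
    This is NOT the plasma chemistry model of the paper, just a stand-in output."""
    kf = k["e + CO2 -> e + CO + O"]
    kb = k["CO + O + M -> CO2 + M"]
    return kf / (kf + 50.0 * kb)

conv = np.array([toy_conversion(sample_rates()) for _ in range(n_samples)])
lo, hi = np.percentile(conv, [15, 85])          # roughly a 70% confidence interval
print(f"CO2 conversion: median {np.median(conv):.2f}, 70% CI [{lo:.2f}, {hi:.2f}]")
```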
Combining uncertainty factors in deriving human exposure levels of noncarcinogenic toxicants.
Kodell, R L; Gaylor, D W
1999-01-01
Acceptable levels of human exposure to noncarcinogenic toxicants in environmental and occupational settings generally are derived by reducing experimental no-observed-adverse-effect levels (NOAELs) or benchmark doses (BDs) by a product of uncertainty factors (Barnes and Dourson, Ref. 1). These factors are presumed to ensure safety by accounting for uncertainty in dose extrapolation, uncertainty in duration extrapolation, differential sensitivity between humans and animals, and differential sensitivity among humans. The common default value for each uncertainty factor is 10. This paper shows how estimates of means and standard deviations of the approximately log-normal distributions of individual uncertainty factors can be used to estimate percentiles of the distribution of the product of uncertainty factors. An appropriately selected upper percentile, for example, 95th or 99th, of the distribution of the product can be used as a combined uncertainty factor to replace the conventional product of default factors.
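Because the product of independent log-normal factors is itself log-normal, the percentile of the combined factor follows directly from the summed log-means and log-variances. The sketch below illustrates that calculation; the geometric means and geometric standard deviations assigned to the individual factors are hypothetical, not the values derived in the paper.

```python
import numpy as np
from scipy.stats import norm

# Illustrative log-normal parameters for individual uncertainty factors
# (geometric mean gm and geometric standard deviation gsd are hypothetical).
factors = {
    "interspecies":        dict(gm=3.0, gsd=2.0),
    "intraspecies":        dict(gm=3.0, gsd=2.5),
    "subchronic->chronic": dict(gm=2.0, gsd=2.0),
    "LOAEL->NOAEL":        dict(gm=2.0, gsd=1.8),
}

# The log of a product of independent log-normals is normal with summed moments.
mu    = sum(np.log(f["gm"]) for f in factors.values())
sigma = np.sqrt(sum(np.log(f["gsd"]) ** 2 for f in factors.values()))

for p in (0.95, 0.99):
    combined = np.exp(mu + norm.ppf(p) * sigma)
    print(f"{int(p * 100)}th percentile of the product: {combined:.0f}")

# Conventional default: a factor of 10 per source of uncertainty.
print(f"product of default factors of 10: {10 ** len(factors):.0f}")
```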
NASA Astrophysics Data System (ADS)
Giambelluca, Thomas W.; Loague, Keith; Green, Richard E.; Nullet, Michael A.
1996-06-01
In this paper, uncertainty in recharge estimates is investigated relative to its impact on assessments of groundwater contamination vulnerability using a relatively simple pesticide mobility index, attenuation factor (AF). We employ a combination of first-order uncertainty analysis (FOUA) and sensitivity analysis to investigate recharge uncertainties for agricultural land on the island of O'ahu, Hawai'i, that is currently, or has been in the past, under sugarcane or pineapple cultivation. Uncertainty in recharge due to recharge component uncertainties is 49% of the mean for sugarcane and 58% of the mean for pineapple. The components contributing the largest amounts of uncertainty to the recharge estimate are irrigation in the case of sugarcane and precipitation in the case of pineapple. For a suite of pesticides formerly or currently used in the region, the contribution to AF uncertainty of recharge uncertainty was compared with the contributions of other AF components: retardation factor (RF), a measure of the effects of sorption; soil-water content at field capacity (ΘFC); and pesticide half-life (t1/2). Depending upon the pesticide, the contribution of recharge to uncertainty ranks second or third among the four AF components tested. The natural temporal variability of recharge is another source of uncertainty in AF, because the index is calculated using the time-averaged recharge rate. Relative to the mean, recharge variability is 10%, 44%, and 176% for the annual, monthly, and daily time scales, respectively, under sugarcane, and 31%, 112%, and 344%, respectively, under pineapple. In general, uncertainty in AF associated with temporal variability in recharge at all time scales exceeds AF. For chemicals such as atrazine or diuron under sugarcane, and atrazine or bromacil under pineapple, the range of AF uncertainty due to temporal variability in recharge encompasses significantly higher levels of leaching potential at some locations than that indicated by the AF estimate.
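A compact illustration of first-order uncertainty analysis (FOUA) applied to an attenuation-factor-type index is sketched below. The AF expression is assumed to be the standard Rao-type leaching index, and the nominal values and standard deviations are illustrative rather than the O'ahu data.

```python
import numpy as np

def attenuation_factor(q, RF, theta_fc, t_half, d=1.0):
    """Assumed Rao-type attenuation factor:
    AF = exp(-0.693 * d * RF * theta_fc / (q * t_half)),
    with travel depth d [m], recharge q [m/d], retardation factor RF [-],
    field-capacity water content theta_fc [-], and half-life t_half [d]."""
    return np.exp(-0.693 * d * RF * theta_fc / (q * t_half))

# Illustrative nominal values and standard deviations (not the study's data)
nominal = dict(q=0.003, RF=4.0, theta_fc=0.30, t_half=60.0)
sd      = dict(q=0.0015, RF=1.0, theta_fc=0.05, t_half=20.0)

# FOUA: Var(AF) ~= sum_i (dAF/dx_i)^2 * Var(x_i), derivatives by finite differences
af0, var_terms = attenuation_factor(**nominal), {}
for name in nominal:
    x = dict(nominal)
    h = 1e-6 * nominal[name]
    x[name] += h
    dafdx = (attenuation_factor(**x) - af0) / h
    var_terms[name] = (dafdx * sd[name]) ** 2

total_var = sum(var_terms.values())
print(f"AF = {af0:.4f} +/- {np.sqrt(total_var):.4f}")
for name, v in sorted(var_terms.items(), key=lambda kv: -kv[1]):
    print(f"  {name:9s} contributes {100 * v / total_var:5.1f}% of Var(AF)")
```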
Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...
USDA-ARS?s Scientific Manuscript database
Hydrologic models are used to simulate the responses of agricultural systems to different inputs and management strategies to identify alternative management practices to cope up with future climate and/or geophysical changes. The Agricultural Policy/Environmental eXtender (APEX) is a model develope...
Mission Assurance Analysis of Theater Ballistic Missile Defense Systems
2011-09-01
[Fragment of the report's list of figures: Figure 2, MOSAIC Management Paradigm; Figure 3, Basic Success Profiles; Figure 4, Success Profile with Uncertainty Range; Figure 5, Success Profile with Event Sensitivity (all after Alberts et al., 2008).]
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.
2014-01-01
The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed, inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology was also introduced to quantify the plausibility of a design passing certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
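For readers unfamiliar with nonintrusive polynomial chaos, the sketch below fits a small Hermite chaos expansion to a stand-in function of two standard-normal inputs by least-squares regression and reads the mean, variance and a first-order Sobol index off the coefficients. The surrogate target function and sample sizes are invented; this is not the sonic-boom tool chain of the paper.

```python
import math
from itertools import product

import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(1)

def expensive_model(x1, x2):
    """Hypothetical stand-in for an expensive simulation with two standard-normal inputs."""
    return np.exp(0.3 * x1) + 0.5 * x1 * x2 + 0.1 * x2 ** 2

def he(k, x):
    """Probabilists' Hermite polynomial He_k evaluated at x."""
    return hermeval(x, [0.0] * k + [1.0])

# Total-degree-2 basis in two variables
idx = [(i, j) for i, j in product(range(3), repeat=2) if i + j <= 2]

# Non-intrusive step: sample the germ, run the model, fit coefficients by least squares
n_train = 200
x = rng.standard_normal((n_train, 2))
A = np.column_stack([he(i, x[:, 0]) * he(j, x[:, 1]) for i, j in idx])
y = expensive_model(x[:, 0], x[:, 1])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Moments and a first-order Sobol index follow from the coefficients, using <He_i^2> = i!
norms = np.array([math.factorial(i) * math.factorial(j) for i, j in idx], dtype=float)
mean = coef[idx.index((0, 0))]
var = sum(c ** 2 * n for (i, j), c, n in zip(idx, coef, norms) if (i, j) != (0, 0))
s1_x1 = sum(c ** 2 * n for (i, j), c, n in zip(idx, coef, norms) if i >= 1 and j == 0) / var
print(f"surrogate mean = {mean:.3f}, variance = {var:.3f}, S1(x1) = {s1_x1:.2f}")
```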
Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia
NASA Astrophysics Data System (ADS)
Mather, B.; Moresi, L. N.; Rayner, P. J.
2017-12-01
The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and the a priori information used to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that are assumed to be homogeneous. However, the boundaries of these lithologies are themselves uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the `forward' mode, but at the expense of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis that perturbs our observations and a priori information within their probability distributions to estimate the posterior uncertainty of thermo-chemical parameters in the crust.
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples-thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities-to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
Robustness of Thirty Meter Telescope primary mirror control
NASA Astrophysics Data System (ADS)
Macmynowski, Douglas G.; Thompson, Peter M.; Shelton, Chris; Roberts, Lewis C., Jr.
2010-07-01
The primary mirror control system for the Thirty Meter Telescope (TMT) maintains the alignment of the 492 segments in the presence of both quasi-static (gravity and thermal) and dynamic disturbances due to unsteady wind loads. The latter results in a desired control bandwidth of 1 Hz at high spatial frequencies. The achievable bandwidth is limited by robustness to (i) uncertain telescope structural dynamics (control-structure interaction) and (ii) small perturbations in the ill-conditioned influence matrix that relates segment edge sensor response to actuator commands. Both of these effects are considered herein using models of TMT. The former is explored through multivariable sensitivity analysis on a reduced-order Zernike-basis representation of the structural dynamics. The interaction matrix ("A-matrix") uncertainty has been analyzed theoretically elsewhere, and is examined here for realistic amplitude perturbations due to segment and sensor installation errors, and gravity and thermal induced segment motion. The primary influence of A-matrix uncertainty is on the control of "focus mode"; this is the least observable mode, measurable only through the edge-sensor (gap-dependent) sensitivity to the dihedral angle between segments. Accurately estimating focus mode will require updating the A-matrix as a function of the measured gap. A-matrix uncertainty also results in a higher gain-margin requirement for focus mode, and hence the A-matrix and CSI robustness need to be understood simultaneously. Based on the robustness analysis, the desired 1 Hz bandwidth is achievable in the presence of uncertainty for all except the lowest spatial-frequency response patterns of the primary mirror.
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with minimal effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. With data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to turn initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
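The following sketch shows one way profile-likelihood samples can be turned into a per-parameter measure of prediction uncertainty. The two-compartment prediction function, the profile samples and the spread-based index are illustrative stand-ins; the PLS index and PLS entropy defined in the paper are computed differently in detail.

```python
import numpy as np

def prediction(theta, t=5.0):
    """Hypothetical model prediction at time t for parameter vector theta = (k1, k2)."""
    k1, k2 = theta
    return k1 / (k1 - k2) * (np.exp(-k2 * t) - np.exp(-k1 * t))

# Hypothetical profile-likelihood samples: for each parameter, the parameter vectors
# accepted while profiling it (i.e. lying within the chi-squared confidence threshold).
rng = np.random.default_rng(3)
profiles = {
    "k1": np.column_stack([np.linspace(0.8, 1.4, 25),                  # profiled parameter
                           0.30 + 0.05 * rng.standard_normal(25)]),    # re-optimised k2
    "k2": np.column_stack([1.10 + 0.05 * rng.standard_normal(25),      # re-optimised k1
                           np.linspace(0.15, 0.45, 25)]),              # profiled parameter
}

# Prediction spread induced by each profile, used here as a PLS-like index
spread = {p: np.ptp([prediction(th) for th in samples])
          for p, samples in profiles.items()}
total = sum(spread.values())
for p, s in spread.items():
    print(f"{p}: prediction spread {s:.3f} ({100 * s / total:.0f}% of the summed spread)")
```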
Analyzing the quality robustness of chemotherapy plans with respect to model uncertainties.
Hoffmann, Anna; Scherrer, Alexander; Küfer, Karl-Heinz
2015-01-01
Mathematical models of chemotherapy planning problems contain various biomedical parameters whose values are difficult to quantify and thus subject to some uncertainty. This uncertainty propagates into the therapy plans computed on these models, which raises the question of how robust the expected therapy quality is. This work introduces a combined approach for analyzing the quality robustness of plans, in terms of dosing levels, with respect to model uncertainties in chemotherapy planning. It uses concepts from multi-criteria decision making for studying parameters related to the balancing of the different therapy goals, and concepts from sensitivity analysis for the examination of parameters describing the underlying biomedical processes and their interplay. This approach allows for a profound assessment of how stable a therapy plan's quality is with respect to parametric changes in the underlying mathematical model. Copyright © 2014 Elsevier Inc. All rights reserved.
Parameter uncertainty analysis of a biokinetic model of caesium
Li, W. B.; Klein, W.; Blanchardon, Eric; ...
2014-04-17
Parameter uncertainties for the biokinetic model of caesium (Cs) developed by Leggett et al. were inventoried and evaluated. Parameter uncertainty analysis methods were used to assess the uncertainties of model predictions under the assumed model parameter uncertainties and distributions. Furthermore, the importance of individual model parameters was assessed by means of sensitivity analysis. The calculated uncertainties of model predictions were compared with human data of Cs measured in blood and in the whole body. It was found that propagating the derived uncertainties in model parameter values reproduced the range of bioassay data observed in human subjects at different times after intake. The maximum ranges, expressed as uncertainty factors (UFs) (defined as the square root of the ratio between the 97.5th and 2.5th percentiles) of blood clearance, whole-body retention and urinary excretion of Cs predicted at early times after intake were, respectively: 1.5, 1.0 and 2.5 at the first day; 1.8, 1.1 and 2.4 at Day 10; and 1.8, 2.0 and 1.8 at Day 100. For late times (1000 d) after intake, the UFs increased to 43, 24 and 31, respectively. The model parameters of the transfer rates between kidneys and blood and between muscle and blood, and the rate of transfer from kidneys to urinary bladder content, are most influential for the blood clearance and the whole-body retention of Cs. For the urinary excretion, the parameters of the transfer rates from urinary bladder content to urine and from kidneys to urinary bladder content have the greatest impact. The implication and effect of the larger uncertainty of 43 in whole-body retention at later times, say after Day 500, on the estimated equivalent and effective doses will be explored in subsequent work in the framework of EURADOS.
NASA Astrophysics Data System (ADS)
Nunnenmann, Elena; Fischer, Ulrich; Stieglitz, Robert
2017-09-01
An uncertainty analysis was performed for the tritium breeding ratio (TBR) of a fusion power plant of the European DEMO type using the MCSEN patch to the MCNP Monte Carlo code. The breeding blanket was of the Helium Cooled Pebble Bed (HCPB) type, currently under development in the European Power Plant Physics and Technology (PPPT) programme for a fusion power demonstration reactor (DEMO). A suitable 3D model of the DEMO reactor with HCPB blanket modules, as routinely used for blanket design calculations, was employed. The nuclear cross-section data were taken from the JEFF-3.2 data library. For the uncertainty analysis, the isotopes H-1, Li-6, Li-7, Be-9, O-16, Si-28, Si-29, Si-30, Cr-52, Fe-54, Fe-56, Ni-58, W-182, W-183, W-184 and W-186 were considered. The covariance data were taken from JEFF-3.2 where available; otherwise a combination of FENDL-2.1 for Li-7, EFF-3 for Be-9 and JENDL-3.2 for O-16 was used and compared with data from TENDL-2014. Another comparison was performed with covariance data from JEFF-3.3T1. The analyses show an overall uncertainty of ± 3.2% for the TBR when using JEFF-3.2 covariance data with the mentioned additions. When using TENDL-2014 covariance data as a replacement, the uncertainty increases to ± 8.6%. For JEFF-3.3T1 the uncertainty result is ± 5.6%. The uncertainty is dominated by the O-16, Li-6 and Li-7 cross-sections.
Use of a control film piece in radiochromic film dosimetry.
Aldelaijan, Saad; Alzorkany, Faisal; Moftah, Belal; Buzurovic, Ivan; Seuntjens, Jan; Tomic, Nada; Devic, Slobodan
2016-01-01
Radiochromic films change their color upon irradiation due to polymerization of the sensitive component embedded within the sensitive layer. However, agents, other than monitored radiation, can lead to a change in the color of the sensitive layer (temperature, humidity, UV light) that can be considered as a background signal and can be removed from the actual measurement by using a control film piece. In this work, we investigate the impact of the use of control film pieces on both accuracy and uncertainty of dose measured using radiochromic film based reference dosimetry protocol. We irradiated "control" film pieces (EBT3 GafChromic(TM) film model) to known doses in a range of 0.05-1 Gy, and five film pieces of the same size to 2, 5, 10, 15 and 20 Gy, considered to be "unknown" doses. Depending on a dose range, two approaches to incorporating control film piece were investigated: signal and dose corrected method. For dose values greater than 10 Gy, the increase in accuracy of 3% led to uncertainty loss of 5% by using dose corrected approach. At lower doses and signals of the order of 5%, we observed an increase in accuracy of 10% with a loss of uncertainty lower than 1% by using the corrected signal approach. Incorporation of the signal registered by the control film piece into dose measurement analysis should be a judgment call of the user based on a tradeoff between deemed accuracy and acceptable uncertainty for a given dose measurement. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Steinschneider, S.; Wi, S.; Brown, C. M.
2013-12-01
Flood risk management performance is investigated within the context of integrated climate and hydrologic modeling uncertainty to explore system robustness. The research question investigated is whether structural and hydrologic parameterization uncertainties are significant relative to other uncertainties such as climate change when considering water resources system performance. Two hydrologic models are considered, a conceptual, lumped parameter model that preserves the water balance and a physically-based model that preserves both water and energy balances. In the conceptual model, parameter and structural uncertainties are quantified and propagated through the analysis using a Bayesian modeling framework with an innovative error model. Mean climate changes and internal climate variability are explored using an ensemble of simulations from a stochastic weather generator. The approach presented can be used to quantify the sensitivity of flood protection adequacy to different sources of uncertainty in the climate and hydrologic system, enabling the identification of robust projects that maintain adequate performance despite the uncertainties. The method is demonstrated in a case study for the Coralville Reservoir on the Iowa River, where increased flooding over the past several decades has raised questions about potential impacts of climate change on flood protection adequacy.
Jensen, Cathrine Elgaard; Riis, Allan; Petersen, Karin Dam; Jensen, Martin Bach; Pedersen, Kjeld Møller
2017-05-01
In connection with the publication of a clinical practice guideline on the management of low back pain (LBP) in general practice in Denmark, a cluster randomised controlled trial was conducted. In this trial, a multifaceted guideline implementation strategy to improve general practitioners' treatment of patients with LBP was compared with a usual implementation strategy. The aim was to determine whether the multifaceted strategy was cost effective, as compared with the usual implementation strategy. The economic evaluation was conducted as a cost-utility analysis in which costs collected from a societal perspective and quality-adjusted life years were used as outcome measures. The analysis was conducted as a within-trial analysis with a 12-month time horizon consistent with the follow-up period of the clinical trial. To adjust for a priori selected covariates, generalised linear models with a gamma family were used to estimate incremental costs and quality-adjusted life years. Furthermore, both deterministic and probabilistic sensitivity analyses were conducted. Results showed that costs associated with primary health care were higher, whereas secondary health care costs were lower, for the intervention group when compared with the control group. When adjusting for covariates, the intervention was less costly, and there was no significant difference in effect between the 2 groups. Sensitivity analyses showed that the results were sensitive to uncertainty. In conclusion, the multifaceted implementation strategy was cost saving when compared with the usual strategy for implementing LBP clinical practice guidelines in general practice. Furthermore, there was no significant difference in effect, and the estimate was sensitive to uncertainty.
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.
Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions. Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system (quantification of margins and uncertainty, QMU)? Optimization: What parameter values yield the best performing design or operating condition, given constraints? Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
Uncertainty in the evaluation of the Predicted Mean Vote index using Monte Carlo analysis.
Ricciu, R; Galatioto, A; Desogus, G; Besalduch, L A
2018-06-06
Today, evaluation of thermohygrometric indoor conditions is one of the most useful tools for building design and re-design and can be used to determine energy consumption in conditioned buildings. Since the introduction of the Predicted Mean Vote index (PMV), researchers have thoroughly investigated its issues in order to reach more accurate results; however, several shortcomings have yet to be solved. Among them is the uncertainty of the environmental and subjective parameters linked to the standard PMV approach of ISO 7730 that classifies the thermal environment. To this end, this paper discusses the known thermal comfort models and the measurement approaches, paying particular attention to measurement uncertainties and their influence on PMV determination. A Monte Carlo analysis has been applied to a data series in a "black-box" environment, and each involved parameter has been analysed in the PMV range from -0.9 to 0.9 under different Relative Humidity conditions. Furthermore, a sensitivity analysis has been performed in order to define the role of each variable. The results showed that an uncertainty propagation method could improve PMV model application, especially where it should be very accurate (-0.2 < PMV < 0.2 range; winter season with Relative Humidity of 30%). Copyright © 2018 Elsevier Ltd. All rights reserved.
Statistical Performances of Resistive Active Power Splitter
NASA Astrophysics Data System (ADS)
Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul
2016-03-01
In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) are proposed. It is based on an active cell composed of a Field Effect Transistor in cascade with a shunted resistor at the input and the output (resistive amplifier topology). The PWS uncertainty with respect to resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, we can easily control the device gain by varying a resistance. This provides a useful tool to analyse the statistical sensitivity of the system in an uncertain environment.
NASA Astrophysics Data System (ADS)
Dimitriadis, Panayiotis; Tegos, Aristoteles; Oikonomou, Athanasios; Pagana, Vassiliki; Koukouvinos, Antonios; Mamassis, Nikos; Koutsoyiannis, Demetris; Efstratiadis, Andreas
2016-03-01
One-dimensional and quasi-two-dimensional hydraulic freeware models (HEC-RAS, LISFLOOD-FP and FLO-2d) are widely used for flood inundation mapping. These models are tested on a benchmark case with a mixed rectangular-triangular channel cross section. Using a Monte-Carlo approach, we carry out an extended sensitivity analysis by simultaneously varying the input discharge, the longitudinal and lateral gradients and the roughness coefficients, as well as the grid cell size. Based on statistical analysis of three output variables of interest, i.e. the water depths at the inflow and outflow locations and the total flood volume, we investigate the uncertainty enclosed in different model configurations and flow conditions, without the influence of errors and other assumptions on topography, channel geometry and boundary conditions. Moreover, we estimate the uncertainty associated with each input variable and compare it to the overall uncertainty. The outcomes of the benchmark analysis are further highlighted by applying the three models to real-world flood propagation problems, in the context of two challenging case studies in Greece.
Uncertainty Analysis for the Miniaturized Laser Heterodyne Radiometer (mini-LHR)
NASA Technical Reports Server (NTRS)
Clarke, G. B.; Wilson, E. L.; Miller, J. H.; Melroy, H. R.
2014-01-01
Presented here is a sensitivity analysis for the miniaturized laser heterodyne radiometer (mini-LHR). This passive, ground-based instrument measures carbon dioxide (CO2) in the atmospheric column and has been under development at NASA/GSFC since 2009. The goal of this development is to produce a low-cost, easily deployable instrument that can extend current ground measurement networks in order to (1) validate column satellite observations, (2) provide coverage in regions of limited satellite observations, (3) target regions of interest such as thawing permafrost, and (4) support the continuity of a long-term climate record. In this paper an uncertainty analysis of the instrument performance is presented and compared with results from three sets of field measurements. The signal-to-noise ratio (SNR) and corresponding uncertainty for a single scan are calculated to be 329.4 ± 1.3 by propagating errors through the equation governing the SNR. An absorbance noise of 0.0024 is reported for 6 averaged scans of field data, corresponding to an instrument precision of approximately 0.2 ppmv for CO2.
Risk-Sensitivity in Sensorimotor Control
Braun, Daniel A.; Nagengast, Arne J.; Wolpert, Daniel M.
2011-01-01
Recent advances in theoretical neuroscience suggest that motor control can be considered as a continuous decision-making process in which uncertainty plays a key role. Decision-makers can be risk-sensitive with respect to this uncertainty in that they may not only consider the average payoff of an outcome, but also consider the variability of the payoffs. Although such risk-sensitivity is a well-established phenomenon in psychology and economics, it has been much less studied in motor control. In fact, leading theories of motor control, such as optimal feedback control, assume that motor behaviors can be explained as the optimization of a given expected payoff or cost. Here we review evidence that humans exhibit risk-sensitivity in their motor behaviors, thereby demonstrating sensitivity to the variability of “motor costs.” Furthermore, we discuss how risk-sensitivity can be incorporated into optimal feedback control models of motor control. We conclude that risk-sensitivity is an important concept in understanding individual motor behavior under uncertainty. PMID:21283556
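To make the mean-variance notion of risk-sensitivity concrete, the toy sketch below scores two hypothetical movement strategies with an objective of the form expected cost plus a risk premium proportional to the cost variance. All numbers are invented, and this is only one of several risk-sensitive formulations discussed in this literature.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two hypothetical reaching strategies with similar mean motor cost but very
# different variability (e.g. a fast risky movement vs. a slow safe one).
costs = {
    "risky": rng.normal(10.0, 4.0, 10_000),
    "safe":  rng.normal(10.5, 0.5, 10_000),
}

def risk_sensitive_cost(c, theta):
    """Mean-variance objective: expected cost plus a risk premium.
    theta > 0 is risk-averse, theta < 0 risk-seeking, theta = 0 risk-neutral."""
    return c.mean() + theta * c.var()

for theta in (0.0, 0.1):
    best = min(costs, key=lambda a: risk_sensitive_cost(costs[a], theta))
    print(f"theta = {theta:+.1f}: preferred strategy = {best}")
```

With theta = 0 the lower-mean "risky" strategy is chosen; once the variance term is penalised, the "safe" strategy wins, which is the qualitative signature of risk-sensitivity the review describes.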
Drake, Tom; Chalabi, Zaid; Coker, Richard
2015-02-01
Investment in pandemic preparedness is a long-term gamble, with the return on investment coming at an unknown point in the future. Many countries have chosen to stockpile key resources, and the number of pandemic economic evaluations has risen sharply since 2009. We assess the importance of uncertainty in time-to-pandemic (and the associated discounting) in pandemic economic evaluation, a factor frequently neglected in the literature to date. We use a probability tree model and Monte Carlo parameter sampling to consider the cost effectiveness of antiviral stockpiling in Cambodia under parameter uncertainty. Mean elasticity and mutual information (MI) are used to assess the importance of time-to-pandemic compared with other parameters. We also consider the sensitivity to the choice of sampling distribution used to model time-to-pandemic uncertainty. Time-to-pandemic and discount rate are the primary drivers of sensitivity and uncertainty in pandemic cost effectiveness models. Base case cost effectiveness of antiviral stockpiling ranged between US$112 and US$3599 per DALY averted using historical pandemic intervals for time-to-pandemic. The mean elasticities for time-to-pandemic and discount rate were greater than those for all other parameters. Similarly, the MI scores for time-to-pandemic and discount rate were greater than those for other parameters. Time-to-pandemic and discount rate were key drivers of uncertainty in cost-effectiveness results regardless of the choice of time-to-pandemic sampling distribution. Time-to-pandemic assumptions can "substantially" affect cost-effectiveness results and, in our model, are a greater contributor to uncertainty in cost-effectiveness results than any other parameter. We strongly recommend that cost-effectiveness models include probabilistic analysis of time-to-pandemic uncertainty. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
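The interaction between an uncertain time-to-pandemic and the discount rate can be illustrated with a few lines of Monte Carlo, as sketched below. The stockpile cost, benefit size, pandemic-interval distribution and discount rates are hypothetical placeholders, not the Cambodian model inputs.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

stockpile_cost = 10e6                  # paid now (hypothetical, USD)
dalys_averted = 20_000                 # hypothetical benefit when the pandemic occurs

# Time-to-pandemic sampled from an exponential distribution whose mean equals an
# assumed historical mean pandemic interval of 30 years.
t_pandemic = rng.exponential(30.0, n)

for r in (0.00, 0.03, 0.06):           # discount rate
    discounted_dalys = dalys_averted / (1.0 + r) ** t_pandemic
    icer = stockpile_cost / discounted_dalys        # cost per DALY averted
    lo, hi = np.percentile(icer, [2.5, 97.5])
    print(f"r = {r:.2f}: median ${np.median(icer):,.0f}/DALY "
          f"(95% interval ${lo:,.0f} - ${hi:,.0f})")
```

Even in this toy setting the spread of the cost-effectiveness ratio widens sharply with the discount rate, which is the qualitative point the abstract makes about time-to-pandemic and discounting.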
Helping Water Utilities Grapple with Climate Change
NASA Astrophysics Data System (ADS)
Yates, D.; Gracely, B.; Miller, K.
2008-12-01
The Water Research Foundation (WRF), serving the drinking water industry, and the National Center for Atmospheric Research (NCAR) are collaborating on an effort to develop and implement locally relevant, structured processes to help water utilities consider the impacts that climate variability and change might have on their water systems and the corresponding adaptation options. Adopting a case-study approach, the structured process includes 1) a problem definition phase, focused on identifying goals, information needs, utility vulnerabilities and possible adaptation options in the face of climate and hydrologic uncertainty; 2) developing and/or modifying system-specific Integrated Water Resource Management (IWRM) models and conducting sensitivity analysis to identify critical variables; 3) developing probabilistic climate change scenarios focused on exploring uncertainties identified as important in the sensitivity analysis in step 2; and 4) implementing the structured process and examining approaches to decision making under uncertainty. Collaborators include the following drinking water utilities and two state agencies: 1) the Inland Empire Utility Agency, CA; 2) the El Dorado Irrigation District, Placerville, CA; 3) the Portland Water Bureau, Portland, OR; 4) Colorado Springs Utilities, Colorado Springs, CO; 5) Cincinnati Water, Cincinnati, OH; 6) the Massachusetts Water Resources Authority (MWRA), Boston, MA; 7) Durham Water, Durham, NC; and 8) Palm Beach County Water (PBCW), Palm Beach, FL. The California Department of Water Resources and the Colorado Water Conservation Board are the state agencies with which we have collaborated.
Effects of buffer size and shape on associations between the built environment and energy balance.
James, Peter; Berrigan, David; Hart, Jaime E; Hipp, J Aaron; Hoehner, Christine M; Kerr, Jacqueline; Major, Jacqueline M; Oka, Masayoshi; Laden, Francine
2014-05-01
Uncertainty in the relevant spatial context may drive heterogeneity in findings on the built environment and energy balance. To estimate the effect of this uncertainty, we conducted a sensitivity analysis defining intersection and business densities and counts within different buffer sizes and shapes on associations with self-reported walking and body mass index. Linear regression results indicated that the scale and shape of buffers influenced study results and may partly explain the inconsistent findings in the built environment and energy balance literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
Space system operations and support cost analysis using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.
1990-01-01
This paper evaluates the use of the Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
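A toy illustration of how a Markov chain yields expected operations and support cost is sketched below: the fundamental matrix of an absorbing chain gives the expected number of visits to each transient state, which is then weighted by a per-visit cost. The states, transition probabilities and costs are entirely hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical per-flight state-transition model for a reusable vehicle.
# Transient states: 0 = fly again after routine turnaround, 1 = fly again after
# major refurbishment. The remaining probability in each row is absorption
# (vehicle retired or lost).
Q = np.array([[0.90, 0.07],
              [0.60, 0.30]])
cost = np.array([2.0, 15.0])    # O&S cost per visit to each state, $M (illustrative)

# Fundamental matrix N = (I - Q)^-1: expected visits to each transient state
# before absorption, starting from each transient state.
N = np.linalg.inv(np.eye(2) - Q)
expected_visits = N[0]          # starting from a routine turnaround
print(f"expected flights before retirement: {expected_visits.sum():.1f}")
print(f"expected life-cycle O&S cost: ${expected_visits @ cost:.1f}M")

# Simple sensitivity check: perturb the refurbishment probability by +/-10%,
# keeping the per-flight loss probability of the routine state fixed at 0.03.
for scale in (0.9, 1.1):
    Qp = Q.copy()
    Qp[0, 1] *= scale
    Qp[0, 0] = 0.97 - Qp[0, 1]
    Np = np.linalg.inv(np.eye(2) - Qp)
    print(f"refurbishment probability x{scale:.1f}: cost = ${Np[0] @ cost:.1f}M")
```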
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonaque, J; Bautista-Ballesteros, J; Ibanez-Rosello, B
Purpose: To estimate the sensitivity of the TrueBeam 2.0 Imaging System 6DoF automatic matching tool through the acquisition of cone-beam CT images of different phantoms, applying submillimeter translations and rotations of tenths of a degree and registering them with the simulation CT images. Methods: To evaluate the overall imaging system, we consider two uncertainty sources. The first is the uncertainty of the manual phantom displacement (ε-m). This uncertainty is measured with a digital caliper (0.01 mm) for the vertical (Vrt), lateral (Lat) and longitudinal (Lng) directions, with a digital inclinometer (0.01°) for pitch and roll, and with the phantom's own scale for the coordinate rotation (Rtn). The second uncertainty is the displacement detected by the matching algorithm (σ-d), which we obtain from the standard deviations of the different measurements. We use three different phantoms: the BrainLab Radiosurgery mask-support system with an anthropomorphic dummy, adapted to allow displacements of 0.1 mm in the Vrt, Lat and Lng dimensions and rotations of 0.1° in the Pitch dimension; and, for the analysis of the Rtn and Roll dimensions, two homemade phantoms (RinoRot and RinoRoll, La Fe Hospital, Valencia, Spain) that allow rotations of 0.3°. Results: For a manual displacement of 0.10 ± 0.03 mm in the translations, the system detects 0.10 ± 0.07 mm, 0.12 ± 0.07 mm and 0.13 ± 0.07 mm (mean ± SD) in Lat, Vrt and Lng, respectively. For the rotational dimensions, a manual displacement of 0.3 ± 0.1° was detected as 0.19 ± 0.06°, 0.29 ± 0.03° and 0.27 ± 0.06° in Pitch, Roll and Rtn. Conclusion: We conclude that the sensitivity of the automatic matching system is within 0.10 mm in translations and 0.3° in rotations. These values are within the intrinsic sensitivity of the software.
The Value of Information in Decision-Analytic Modeling for Malaria Vector Control in East Africa.
Kim, Dohyeong; Brown, Zachary; Anderson, Richard; Mutero, Clifford; Miranda, Marie Lynn; Wiener, Jonathan; Kramer, Randall
2017-02-01
Decision analysis tools and mathematical modeling are increasingly emphasized in malaria control programs worldwide to improve resource allocation and address ongoing challenges with sustainability. However, such tools require substantial scientific evidence, which is costly to acquire. The value of information (VOI) has been proposed as a metric for gauging the value of reduced model uncertainty. We apply this concept to an evidence-based Malaria Decision Analysis Support Tool (MDAST) designed for application in East Africa. In developing MDAST, substantial gaps in the scientific evidence base were identified regarding insecticide resistance in malaria vector control and the effectiveness of alternative mosquito control approaches, including larviciding. We identify four entomological parameters in the model (two for insecticide resistance and two for larviciding) that involve high levels of uncertainty and to which outputs in MDAST are sensitive. We estimate and compare a VOI for combinations of these parameters in evaluating three policy alternatives relative to a status quo policy. We find that having perfect information on the uncertain parameters could improve program net benefits by up to 5-21%, with the highest VOI associated with jointly eliminating uncertainty about the reproductive speed of malaria-transmitting mosquitoes and the initial efficacy of larviciding at reducing the emergence of new adult mosquitoes. Future research on parameter uncertainty in decision analysis of malaria control policy should investigate the VOI with respect to other aspects of malaria transmission (such as antimalarial resistance), the costs of reducing uncertainty in these parameters, and the extent to which imperfect information about these parameters can improve payoffs. © 2016 Society for Risk Analysis.
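A minimal numerical illustration of the VOI idea is the expected value of perfect information (EVPI), computed by Monte Carlo as the gap between deciding after the uncertain parameters are revealed and deciding on expected values alone. The three policies, net-benefit expressions and parameter distributions below are invented stand-ins, not the MDAST model.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 20_000

# Hypothetical uncertain entomological parameters (illustrative distributions)
resistance = rng.beta(2, 5, n)        # insecticide-resistance level (0-1)
larviciding = rng.beta(4, 3, n)       # initial larviciding efficacy (0-1)

# Hypothetical net benefit of three policies for each parameter draw
net_benefit = np.column_stack([
    100 * (1 - resistance),                          # policy A: scale up ITN/IRS only
    80 + 40 * larviciding,                           # policy B: add larviciding
    90 * (1 - 0.5 * resistance) + 15 * larviciding,  # policy C: mixed strategy
])

# EVPI = E[ max_a NB(a, theta) ] - max_a E[ NB(a, theta) ]
best_given_info = net_benefit.max(axis=1).mean()
best_without_info = net_benefit.mean(axis=0).max()
evpi = best_given_info - best_without_info
print(f"EVPI = {evpi:.1f} (same units as net benefit)")
print(f"relative gain from perfect information: {100 * evpi / best_without_info:.1f}%")
```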
Probabilistic and deterministic evaluation of uncertainty in a local scale multi-risk analysis
NASA Astrophysics Data System (ADS)
Lari, S.; Frattini, P.; Crosta, G. B.
2009-04-01
We performed a probabilistic multi-risk analysis (QPRA) at the local scale for a 420 km2 area surrounding the town of Brescia (Northern Italy). We calculated the expected annual loss in terms of economic damage and loss of life for a set of risk scenarios of flood, earthquake and industrial accident with different occurrence probabilities and different intensities. The territorial unit used for the study was the census parcel, of variable area, for which a large amount of data was available. Because of the lack of information available for the evaluation of the hazards, the value of the exposed elements (e.g., residential and industrial area, population, lifelines, sensitive elements such as schools and hospitals) and the process-specific vulnerability, and because of a lack of knowledge of the processes (floods, industrial accidents, earthquakes), we assigned an uncertainty to the input variables of the analysis. For some variables a homogeneous uncertainty was assigned over the whole study area, for instance for the number of buildings of various typologies and for the event occurrence probability. In other cases, as for the phenomenon intensity (e.g., depth of water during a flood) and the probability of impact, the uncertainty was defined in relation to the census parcel area. In fact, by assuming some variables to be homogeneously distributed or averaged over the census parcels, we introduce a larger error for larger parcels. We propagated the uncertainty in the analysis using three different models, describing the reliability of the output (risk) as a function of the uncertainty of the inputs (scenarios and vulnerability functions). We developed a probabilistic approach based on Monte Carlo simulation, and two deterministic models, namely First Order Second Moment (FOSM) and Point Estimate (PE). In general, similar values of expected losses are obtained with the three models. The uncertainty of the final risk value is, in all three cases, around 30% of the expected value. Each of the models, nevertheless, requires different assumptions and computational effort, and provides results with a different level of detail.
NASA Astrophysics Data System (ADS)
Sun, Guodong; Mu, Mu
2017-05-01
An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, identifying, among the numerous physical parameters in atmospheric and oceanic models, a subset of relatively more sensitive and important parameters, and reducing the errors in that subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to identify the subset of relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was used to test the validity of the new approach over China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in the arid and semi-arid regions of China compared to northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate that our approach not only offers a new route to identify relatively more sensitive and important physical parameters but also shows that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.
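In spirit, CNOP-P seeks the parameter perturbation, within a prescribed amplitude constraint, that maximizes a nonlinear measure of the resulting simulation departure. The sketch below solves such a constrained maximization for a toy two-parameter "model" using scipy; the model function, reference parameters and constraint radius are all hypothetical and much simpler than the LPJ application.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(p):
    """Toy nonlinear 'model' output (e.g. equilibrium vegetation carbon) as a
    function of two physical parameters p = (p1, p2); purely illustrative."""
    p1, p2 = p
    return 10.0 * np.tanh(0.8 * p1) + 2.0 * p1 * p2 - 0.5 * p2 ** 2

p_ref = np.array([1.0, 1.5])     # reference parameter values
delta = 0.3                      # prescribed amplitude of parameter perturbations

def departure(pprime):
    """CNOP-P-style cost: squared departure of the simulation caused by pprime."""
    return (simulate(p_ref + pprime) - simulate(p_ref)) ** 2

# Maximise the departure subject to ||p'||_2 <= delta (minimise its negative)
res = minimize(lambda x: -departure(x), x0=np.array([0.01, 0.01]), method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda x: delta - np.linalg.norm(x)}])

pprime_opt = res.x
print(f"CNOP-P perturbation: {pprime_opt}, norm = {np.linalg.norm(pprime_opt):.3f}")
print(f"maximum simulation departure: {np.sqrt(departure(pprime_opt)):.3f}")
# Components of p' with larger magnitude flag the relatively more sensitive parameters.
```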
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shahnam, Mehrdad; Gel, Aytekin; Subramaniyan, Arun K.
Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of a non-intrusive Bayesian uncertainty quantification (UQ) methodology in multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the most suited approach for UQ of a bench scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, the steam to oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis for forward propagation of uncertainties was performed and results show that an increase in steam to oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. Another contribution, in addition to the UQ analysis, is the optimization-based approach to guide the identification of the next best set of additional experimental samples, should the possibility arise for additional experiments. Hence, the surrogate models constructed as part of the UQ analysis are employed to improve the information gain and make incremental recommendations, should the possibility to add more experiments arise. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results. As part of the Bayesian UQ analysis, a global sensitivity analysis was performed based on the simulation results, which shows that the predicted syngas composition is strongly affected not only by the steam-to-oxygen ratio (which was observed in experiments as well) but also by variation in the coal flow rate and particle diameter (which was not observed in experiments). The carbon monoxide mole fraction is underpredicted at lower steam-to-oxygen ratios and overpredicted at higher steam-to-oxygen ratios. The opposite trend is observed for the carbon dioxide mole fraction. These discrepancies are attributed either to excessive segregation of the phases that leads to fuel-rich or fuel-lean regions, or alternatively to the selection of reaction models, where different reaction models and kinetics can lead to different syngas compositions throughout the gasifier. To improve the quality of the numerical models used, the effect that uncertainties in reaction models for gasification, char oxidation, carbon monoxide oxidation, and water gas shift have on the syngas composition at different grid resolutions, along with the bed temperature, was investigated. The global sensitivity analysis showed that among the various reaction models employed for water gas shift, gasification, and char oxidation, the choice of reaction model for water gas shift has the greatest influence on syngas composition, with the gasification reaction model being second. Syngas composition also shows a small sensitivity to the temperature of the bed. The hydrodynamic behavior of the bed did not change beyond a grid spacing of 18 times the particle diameter.
However, the syngas concentration continued to be affected by the grid resolution down to a spacing as small as 9 times the particle diameter. This is due to better resolution of the phase interface between the gas and solids, which leads to stronger heterogeneous reactions. This report is a compilation of three manuscripts published in peer-reviewed journals for the series of studies mentioned above.
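For readers who want to reproduce a variance-based global sensitivity analysis like the one described, the sketch below implements the standard Saltelli/Jansen pick-and-freeze estimators of the first-order and total-effect Sobol indices on a made-up surrogate for the syngas response; the factor names, ranges and response function are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(17)
n, d = 50_000, 3   # samples per matrix, number of uncertain operating factors

def response(x):
    """Hypothetical surrogate for predicted CO mole fraction as a function of
    (coal flow rate, particle diameter, steam-to-oxygen ratio), each scaled to [0, 1]."""
    coal, dp, s2o = x[:, 0], x[:, 1], x[:, 2]
    return 0.3 + 0.2 * s2o ** 2 - 0.1 * s2o * coal + 0.05 * dp

# Pick-and-freeze sampling: two independent matrices A and B, plus matrices AB_i
A, B = rng.random((n, d)), rng.random((n, d))
yA, yB = response(A), response(B)
var_y = np.var(np.concatenate([yA, yB]))

for i, name in enumerate(["coal flow rate", "particle diameter", "steam/O2 ratio"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    yABi = response(ABi)
    s1 = np.mean(yB * (yABi - yA)) / var_y          # first-order index (Saltelli 2010)
    st = 0.5 * np.mean((yA - yABi) ** 2) / var_y    # total-effect index (Jansen 1999)
    print(f"{name:18s}: S1 = {s1:5.2f}, ST = {st:5.2f}")
```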
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T.; Jessee, Matthew Anderson
The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.
ERIC Educational Resources Information Center
Uljarevic, Mirko; Carrington, Sarah; Leekam, Susan
2016-01-01
This study examined the relations between anxiety and individual characteristics of sensory sensitivity (SS) and intolerance of uncertainty (IU) in mothers of children with ASD. The mothers of 50 children completed the Hospital Anxiety and Depression Scale, the Highly Sensitive Person Scale and the IU Scale. Anxiety was associated with both SS and…
Using the Monte Carlo (MC) method, this paper derives arithmetic and geometric means and associated variances of the net capillary drive parameter, G, that appears in the Parlange infiltration model, as a function of soil texture and antecedent soil moisture content. App...
An earlier paper (Hattis et al., 2003) developed a quantitative likelihood-based statistical analysis of the differences in apparent sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth-weaning, and weaning-60 days) relative to exposures in adult...
Reconciling uncertain costs and benefits in bayes nets for invasive species management
Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.
2010-01-01
Bayes nets are used increasingly to characterize environmental systems and formalize probabilistic reasoning to support decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of management and eradication of Red Imported Fire Ants in Southern Queensland, Australia, and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.
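As a toy illustration of the info-gap idea of trading performance against robustness to errors in the probability estimates, the sketch below grows an uncertainty horizon around a nominal eradication probability and records the largest horizon at which the "treat" decision still meets a performance requirement. The utilities, probability and requirement are hypothetical and do not come from the fire-ant application.

```python
import numpy as np

# Nominal (hypothetical) inputs for an eradication decision
p_nominal = 0.7                                   # nominal probability of eradication
utility = {"treat": {"success": 100.0, "failure": -50.0},   # net benefit units
           "do_nothing": -20.0}
required = 0.0   # performance requirement: expected net benefit must stay non-negative

def worst_case_expected_benefit(h):
    """Info-gap model: at horizon of uncertainty h, the true probability may lie
    anywhere in [p_nominal - h, p_nominal + h] (clipped to [0, 1]); take the worst case."""
    p_low = np.clip(p_nominal - h, 0.0, 1.0)
    return p_low * utility["treat"]["success"] + (1 - p_low) * utility["treat"]["failure"]

# Robustness = largest horizon h for which the worst case still meets the requirement
horizons = np.linspace(0.0, 0.7, 701)
feasible = [h for h in horizons if worst_case_expected_benefit(h) >= required]
robustness = max(feasible) if feasible else 0.0
print(f"robustness of the 'treat' decision: h = {robustness:.2f}")
print(f"'do nothing' yields {utility['do_nothing']:.0f} regardless of h")
```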
Probabilistic sizing of laminates with uncertainties
NASA Technical Reports Server (NTRS)
Shah, A. R.; Liaw, D. G.; Chamis, C. C.
1993-01-01
A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.
Oldenburg, Curtis M.; Freifeld, Barry M.; Pruess, Karsten; Pan, Lehua; Finsterle, Stefan; Moridis, George J.
2012-01-01
In response to the urgent need for estimates of the oil and gas flow rate from the Macondo well MC252-1 blowout, we assembled a small team and carried out oil and gas flow simulations using the TOUGH2 codes over two weeks in mid-2010. The conceptual model included the oil reservoir and the well with a top boundary condition located at the bottom of the blowout preventer. We developed a fluid properties module (Eoil) applicable to a simple two-phase and two-component oil-gas system. The flow of oil and gas was simulated using T2Well, a coupled reservoir-wellbore flow model, along with iTOUGH2 for sensitivity analysis and uncertainty quantification. The most likely oil flow rate estimated from simulations based on the data available in early June 2010 was about 100,000 bbl/d (barrels per day) with a corresponding gas flow rate of 300 MMscf/d (million standard cubic feet per day) assuming the well was open to the reservoir over 30 m of thickness. A Monte Carlo analysis of reservoir and fluid properties provided an uncertainty distribution with a long tail extending down to 60,000 bbl/d of oil (170 MMscf/d of gas). The flow rate was most strongly sensitive to reservoir permeability. Conceptual model uncertainty was also significant, particularly with regard to the length of the well that was open to the reservoir. For fluid-entry interval length of 1.5 m, the oil flow rate was about 56,000 bbl/d. Sensitivity analyses showed that flow rate was not very sensitive to pressure-drop across the blowout preventer due to the interplay between gas exsolution and oil flow rate. PMID:21730177
NASA Astrophysics Data System (ADS)
Unger, André J. A.
2010-02-01
This work is the second installment in a two-part series, and focuses on object-oriented programming methods to implement an augmented-state variable approach to aggregate the PCS index and introduce the Bermudan-style call feature into the proposed CAT bond model. The PCS index is aggregated quarterly using a discrete Asian running-sum formulation. The resulting aggregate PCS index augmented-state variable is used to specify the payoff (principal) on the CAT bond based on reinsurance layers. The purpose of the Bermudan-style call option is to allow the reinsurer to minimize their interest rate risk exposure on making fixed coupon payments under prevailing interest rates. A sensitivity analysis is performed to determine the impact of uncertainty in the frequency and magnitude of hurricanes on the price of the CAT bond. Results indicate that while the CAT bond is highly sensitive to the natural variability in the frequency of landfalling hurricanes between El Niño and non-El Niño years, it remains relatively insensitive to uncertainty in the magnitude of damages. In addition, results indicate that the maximum price of the CAT bond is insensitive to whether it is engineered to cover low-frequency, high-magnitude events in a 'high' reinsurance layer relative to high-frequency, low-magnitude events in a 'low' reinsurance layer. Also, while it is possible for the reinsurer to minimize their interest rate risk exposure on the fixed coupon payments, the impact of this risk on the price of the CAT bond appears small relative to the natural variability in the CAT bond price, and consequently catastrophic risk, due to uncertainty in the frequency and magnitude of landfalling hurricanes.
NASA Astrophysics Data System (ADS)
Yi, Yonghong; Kimball, John S.; Chen, Richard H.; Moghaddam, Mahta; Reichle, Rolf H.; Mishra, Umakant; Zona, Donatella; Oechel, Walter C.
2018-01-01
An important feature of the Arctic is large spatial heterogeneity in active layer conditions, which is generally poorly represented by global models and can lead to large uncertainties in predicting regional ecosystem responses and climate feedbacks. In this study, we developed a spatially integrated modeling and analysis framework combining field observations, local-scale ( ˜ 50 m resolution) active layer thickness (ALT) and soil moisture maps derived from low-frequency (L + P-band) airborne radar measurements, and global satellite environmental observations to investigate the ALT sensitivity to recent climate trends and landscape heterogeneity in Alaska. Modeled ALT results show good correspondence with in situ measurements in higher-permafrost-probability (PP ≥ 70 %) areas (n = 33; R = 0.60; mean bias = 1.58 cm; RMSE = 20.32 cm), but with larger uncertainty in sporadic and discontinuous permafrost areas. The model results also reveal widespread ALT deepening since 2001, with smaller ALT increases in northern Alaska (mean trend = 0.32±1.18 cm yr-1) and much larger increases (> 3 cm yr-1) across interior and southern Alaska. The positive ALT trend coincides with regional warming and a longer snow-free season (R = 0.60 ± 0.32). A spatially integrated analysis of the radar retrievals and model sensitivity simulations demonstrated that uncertainty in the spatial and vertical distribution of soil organic carbon (SOC) was the largest factor affecting modeled ALT accuracy, while soil moisture played a secondary role. Potential improvements in characterizing SOC heterogeneity, including better spatial sampling of soil conditions and advances in remote sensing of SOC and soil moisture, will enable more accurate predictions of active layer conditions and refinement of the modeling framework across a larger domain.
Instrumentation and Performance Analysis Plans for the HIFiRE Flight 2 Experiment
NASA Technical Reports Server (NTRS)
Gruber, Mark; Barhorst, Todd; Jackson, Kevin; Eklund, Dean; Hass, Neal; Storch, Andrea M.; Liu, Jiwen
2009-01-01
Supersonic combustion performance of a bi-component gaseous hydrocarbon fuel mixture is one of the primary aspects under investigation in the HIFiRE Flight 2 experiment. In-flight instrumentation and post-test analyses will be two key elements used to determine the combustion performance. Pre-flight computational fluid dynamics (CFD) analyses provide valuable information that can be used to optimize the placement of a constrained set of wall pressure instrumentation in the experiment. The simulations also allow pre-flight assessments of performance sensitivities leading to estimates of overall uncertainty in the determination of combustion efficiency. Based on the pre-flight CFD results, 128 wall pressure sensors have been located throughout the isolator/combustor flowpath to minimize the error in determining the wall pressure force at Mach 8 flight conditions. Also, sensitivity analyses show that mass capture and combustor exit stream thrust are the two primary contributors to uncertainty in combustion efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babendreier, Justin E.; Castleton, Karl J.
2005-08-01
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems - Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMU).
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Briggs, Andrew H; Ades, A E; Price, Martin J
2003-01-01
In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes using probabilistic sensitivity analysis, and a method is required to provide probabilistic probabilities over multiple branches that appropriately represents uncertainty while satisfying the requirement that mutually exclusive event probabilities should sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
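A minimal sketch of the Dirichlet idea described above is given below: sampling a multi-branch transition matrix row by row so that each sampled row automatically sums to 1, with a uniform Dirichlet prior added to observed counts so that zero-count transitions still receive non-zero probability. The transition counts, state labels, and prior strength are hypothetical, not taken from the paper.

```python
# Minimal sketch: Dirichlet-distributed multi-branch probabilities for a
# probabilistic Markov transition matrix. Counts and prior are hypothetical;
# adding a Dirichlet(1,1,1) prior to the counts gives a Bayesian posterior that
# handles zero counts while every sampled row sums to 1.
import numpy as np

rng = np.random.default_rng(0)

# Observed transition counts from each state to (well, sick, dead); note the zeros.
counts = np.array([[85, 10, 5],
                   [ 0, 60, 15],
                   [ 0,  0,  1]])   # absorbing "dead" state

prior = 1.0  # uniform Dirichlet prior on each row (an assumption, not from the paper)

def sample_transition_matrix(counts, prior, rng):
    """Draw one transition matrix from the row-wise Dirichlet posteriors."""
    return np.vstack([rng.dirichlet(row + prior) for row in counts])

# Probabilistic sensitivity analysis: many sampled matrices, each row summing to 1
samples = [sample_transition_matrix(counts, prior, rng) for _ in range(5000)]
p_well_to_dead = np.array([m[0, 2] for m in samples])
print("mean P(well -> dead):", p_well_to_dead.mean().round(4),
      " 95% interval:", np.percentile(p_well_to_dead, [2.5, 97.5]).round(4))
```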
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik
The quantification of uncertainties in design and safety analysis of reactors is today not only broadly accepted, but has in many cases become the preferred way to replace traditional conservative analysis for safety and licensing analysis. The use of a more fundamental methodology is also consistent with the reliable high fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach where sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double heterogeneous fuel design and large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of the SCALE results, the KENO-VI 238 Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the latest ENDF/B-VII.1 cross section library in Serpent led to ~180 pcm lower k∞ values compared to the older ENDF/B-VII.0 dataset, caused by the modified graphite neutron capture cross section. Furthermore, the fourth beta release of SCALE 6.2 likewise produced lower CE k∞ values when compared to SCALE 6.1, and the improved performance of the new 252-group library available in SCALE 6.2 is especially noteworthy. A SCALE/TSUNAMI uncertainty analysis of the Hot Full Power variant for Ex. I-1a furthermore concluded that the 238U(n,γ) (capture) and 235U cross-section covariance matrices contributed the most to the total k∞ uncertainty of 0.58%.
Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik; ...
2016-01-11
The quantification of uncertainties in design and safety analysis of reactors is today not only broadly accepted, but has in many cases become the preferred way to replace traditional conservative analysis for safety and licensing analysis. The use of a more fundamental methodology is also consistent with the reliable high fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach where sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double heterogeneous fuel design and large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of the SCALE results, the KENO-VI 238 Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the latest ENDF/B-VII.1 cross section library in Serpent led to ~180 pcm lower k∞ values compared to the older ENDF/B-VII.0 dataset, caused by the modified graphite neutron capture cross section. Furthermore, the fourth beta release of SCALE 6.2 likewise produced lower CE k∞ values when compared to SCALE 6.1, and the improved performance of the new 252-group library available in SCALE 6.2 is especially noteworthy. A SCALE/TSUNAMI uncertainty analysis of the Hot Full Power variant for Ex. I-1a furthermore concluded that the 238U(n,γ) (capture) and 235U cross-section covariance matrices contributed the most to the total k∞ uncertainty of 0.58%.
Reducing uncertainty in Climate Response Time Scale by Bayesian Analysis of the 8.2 ka event
NASA Astrophysics Data System (ADS)
Lorenz, A.; Held, H.; Bauer, E.; Schneider von Deimling, T.
2009-04-01
We analyze the possibility of uncertainty reduction in Climate Response Time Scale by utilizing Greenland ice-core data that contain the 8.2 ka event within a Bayesian model-data intercomparison with the Earth system model of intermediate complexity, CLIMBER-2.3. Within a stochastic version of the model it has been possible to mimic the 8.2 ka event within a plausible experimental setting and with relatively good accuracy considering the timing of the event in comparison to other modeling exercises [1]. The simulation of the centennial cold event is effectively determined by the oceanic cooling rate, which depends largely on the ocean diffusivity described by diffusion coefficients with relatively wide uncertainty ranges. The idea is to discriminate between the different values of diffusivities according to their likelihood of correctly representing the duration of the 8.2 ka event, and thus to exploit the paleo data to constrain uncertainty in model parameters, in analogy with [2]. Implementing this inverse Bayesian analysis with this model, the technical difficulty arises of establishing the related likelihood numerically in addition to the uncertain model parameters: while mainstream uncertainty analyses can assume a quasi-Gaussian shape of likelihood, with weather fluctuating around a long-term mean, the 8.2 ka event as a highly nonlinear effect precludes such an a priori assumption. As a result of this study [3], the Bayesian analysis showed a reduction of uncertainty in vertical ocean diffusivity parameters of a factor of 2 compared to prior knowledge. This learning effect on the model parameters is propagated to other model outputs of interest, e.g. the inverse ocean heat capacity, which is important for the dominant time scale of climate response to anthropogenic forcing and which, in combination with climate sensitivity, strongly influences the climate system's reaction for the near- and medium-term future. References: [1] E. Bauer, A. Ganopolski, M. Montoya: Simulation of the cold climate event 8200 years ago by meltwater outburst from Lake Agassiz. Paleoceanography 19, PA3014 (2004). [2] T. Schneider von Deimling, H. Held, A. Ganopolski, S. Rahmstorf: Climate sensitivity estimated from ensemble simulations of glacial climates. Climate Dynamics 27, 149-163, DOI 10.1007/s00382-006-0126-8 (2006). [3] A. Lorenz: Diploma Thesis, U Potsdam (2007).
Sufficiently elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The ensuing challenge of examining ever more complex, integrated, higher-ord...
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites...
Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows
NASA Technical Reports Server (NTRS)
Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William
2015-01-01
The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, Wilcox (2006) k-ω Model, and the Menter Shear-Stress Transport Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point quantities are considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contributions of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will reduce the amount of uncertainty in the output significantly for transonic, wall-bounded flows.
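The sketch below illustrates the Sobol-ranking workflow in the simplest possible form, using the SALib package's Saltelli sampling and Sobol analysis. The expensive flow solver is replaced by a cheap placeholder function, and the coefficient names and intervals are illustrative stand-ins rather than the values used with FUN3D in the study; the abstract's actual propagation used non-intrusive polynomial chaos rather than direct Saltelli sampling.

```python
# Hedged sketch of ranking turbulence-model closure coefficients with Sobol
# indices. The CFD solver is replaced by a toy surrogate; names and bounds are
# illustrative assumptions, not the study's inputs.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["sigma", "kappa", "cb1"],          # hypothetical SA-model coefficients
    "bounds": [[0.6, 1.0], [0.38, 0.42], [0.12, 0.14]],
}

def surrogate_drag(x):
    """Placeholder for an expensive CFD quantity of interest (e.g. drag coefficient)."""
    sigma, kappa, cb1 = x
    return 0.02 + 0.05 * (kappa - 0.41) ** 2 + 0.3 * cb1 + 0.01 * sigma * cb1

X = saltelli.sample(problem, 1024)               # Saltelli low-discrepancy design
Y = np.array([surrogate_drag(x) for x in X])

Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.3f}, total = {st:.3f}")
```

Coefficients with large total Sobol indices are the ones for which better information would most reduce output uncertainty, which is the ranking logic described in the abstract.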
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.
Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock also known as Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site specific information available for analysis. The scientific results from this study provide substantial insight that there is a need for site-specific data for efficient predictions of risks associated with storage activities. The presented approach can provide a basis of optimized pressure based monitoring network design at carbon storage sites.
Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site
Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.
2016-01-01
Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock also known as Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site specific information available for analysis. The scientific results from this study provide substantial insight that there is a need for site-specific data for efficient predictions of risks associated with storage activities. The presented approach can provide a basis of optimized pressure based monitoring network design at carbon storage sites. PMID:27996043
Assessment of Uncertainty in the Determination of Activation Energy for Polymeric Materials
NASA Technical Reports Server (NTRS)
Darby, Stephania P.; Landrum, D. Brian; Coleman, Hugh W.
1998-01-01
An assessment of the experimental uncertainty in obtaining the kinetic activation energy from thermogravimetric analysis (TGA) data is presented. A neat phenolic resin, Borden SC1008, was heated at three heating rates to obtain weight loss vs. temperature data. Activation energy was calculated by two methods: the traditional Flynn and Wall method based on the slope of log(q) versus 1/T, and a modification of this method where the ordinate and abscissa are reversed in the linear regression. The modified method produced a more accurate curve fit of the data, was more sensitive to data nonlinearity, and gave a value of activation energy 75 percent greater than the original method. An uncertainty analysis using the modified method yielded a 60 percent uncertainty in the average activation energy. Based on this result, the activation energy for a carbon-phenolic material was doubled and used to calculate the ablation rate in a typical solid rocket environment. Doubling the activation energy increased surface recession by 3 percent. Current TGA data reduction techniques that use the traditional Flynn and Wall approach to calculate activation energy should be changed to the modified method.
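A small sketch of the two data-reduction routes contrasted above is given below. It assumes the Flynn-Wall(-Ozawa) approximation log10(q) ≈ C − 0.4567·Ea/(R·T) at a fixed conversion, and uses made-up heating rates and temperatures; the point is only how reversing ordinate and abscissa changes the fitted slope, and hence the activation energy, when the data are noisy.

```python
# Illustrative comparison of the traditional Flynn-Wall regression (log q vs 1/T)
# with the "modified" regression in which ordinate and abscissa are reversed.
# Heating rates and temperatures are hypothetical; the 0.4567 factor is the
# usual Flynn-Wall(-Ozawa) approximation assumed here.
import numpy as np

R = 8.314  # J/(mol K)

q = np.array([5.0, 10.0, 20.0])          # heating rates [K/min] (hypothetical)
T = np.array([620.0, 635.0, 651.0])      # temperature at fixed conversion [K] (hypothetical)

log_q = np.log10(q)
inv_T = 1.0 / T

# Traditional method: regress log10(q) on 1/T
slope_trad = np.polyfit(inv_T, log_q, 1)[0]
Ea_trad = -slope_trad * R / 0.4567

# Modified method: regress 1/T on log10(q), then invert the slope
slope_mod = np.polyfit(log_q, inv_T, 1)[0]
Ea_mod = -R / (0.4567 * slope_mod)

print(f"Ea (traditional): {Ea_trad/1000:.1f} kJ/mol")
print(f"Ea (modified)   : {Ea_mod/1000:.1f} kJ/mol")
# With noisy data the two regressions differ, because ordinary least squares
# minimizes residuals in the ordinate only; the abstract reports a 75% difference.
```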
Probabilistic Assessment of Above Zone Pressure Predictions at a Geologic Carbon Storage Site
NASA Astrophysics Data System (ADS)
Namhata, Argha; Oladyshkin, Sergey; Dilmore, Robert M.; Zhang, Liwei; Nakles, David V.
2016-12-01
Carbon dioxide (CO2) storage into geological formations is regarded as an important mitigation strategy for anthropogenic CO2 emissions to the atmosphere. This study first simulates the leakage of CO2 and brine from a storage reservoir through the caprock. Then, we estimate the resulting pressure changes at the zone overlying the caprock also known as Above Zone Monitoring Interval (AZMI). A data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is then used to quantify the uncertainty in the above zone pressure prediction based on the uncertainties in different geologic parameters. Finally, a global sensitivity analysis is performed with Sobol indices based on the aPC technique to determine the relative importance of different parameters on pressure prediction. The results indicate that there can be uncertainty in pressure prediction locally around the leakage zones. The degree of such uncertainty in prediction depends on the quality of site specific information available for analysis. The scientific results from this study provide substantial insight that there is a need for site-specific data for efficient predictions of risks associated with storage activities. The presented approach can provide a basis of optimized pressure based monitoring network design at carbon storage sites.
International survey for good practices in forecasting uncertainty assessment and communication
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Piotte, Olivier
2014-05-01
Achieving technically sound flood forecasts is a crucial objective for forecasters, but forecasts are of little use if the users do not properly understand their significance and do not use them properly in decision making. One usual way to clarify the limitations of forecasts is to communicate some information about their uncertainty. Uncertainty assessment and communication to stakeholders are thus important issues for operational flood forecasting services (FFS) but remain open fields for research. The French FFS intends to publish graphical streamflow and level forecasts along with uncertainty assessment in the near future on its website (available to the general public). In order to choose the technical options best adapted to its operational context, it carried out a survey among more than 15 fellow institutions. Most of these provide forecasts and warnings to civil protection officers, while some work mostly for hydroelectricity suppliers. A questionnaire was prepared in order to standardize the analysis of the practices of the surveyed institutions. The survey was conducted by gathering information from technical reports or from the scientific literature, as well as through interviews by phone, email discussions or meetings. The questionnaire helped in the exploration of practices in uncertainty assessment, evaluation and communication. In the analysis drawn from the raw results, attention was paid to the particular context within which each institution works. Results show that most services interviewed assess the uncertainty of their forecasts. However, practices can differ significantly from one country to another. Ensemble approaches are popular techniques; they allow several uncertainty sources to be taken into account. Statistical analysis of past forecasts (such as quantile regression) is also commonly used. Contrary to what was expected, only a few services emphasize the role of the forecaster (subjective assessment). Similar contrasts can be observed in uncertainty communication practices. Some countries are quite advanced in communicating uncertainty to the general public, whereas most restrict this communication to pre-defined stakeholders who have previously been sensitized or trained. Different forms of communication were encountered during the survey, from written comments to complex graphics; no single form stood out as a clear leader. This survey proved useful in identifying difficulties in the design of the next French forecast uncertainty assessment and communication schemes.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty as sensitivity analysis in the PRA model.
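The reliability physics idea described above reduces, in its simplest form, to estimating the probability that the operators' performance time exceeds the phenomenological time available. The sketch below shows this with plain Monte Carlo sampling; the lognormal performance-time and normal phenomenological-time distributions and all parameter values are placeholders, not the fitted MNR values, and Latin Hypercube sampling is approximated by simple random sampling for brevity.

```python
# Minimal sketch of a time-oriented (reliability physics) human error probability:
# HEP = P(performance time > phenomenological time). All distributions and
# parameters below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Phenomenological time: time available before core relocation becomes unavoidable
t_phen = rng.normal(loc=45.0, scale=8.0, size=n)              # minutes (hypothetical)

# Performance time: time operators need to diagnose and act (e.g. from interviews)
t_perf = rng.lognormal(mean=np.log(25.0), sigma=0.5, size=n)  # minutes (hypothetical)

hep = np.mean(t_perf > t_phen)     # failure = acting too late
print(f"Human error probability (core relocation): {hep:.3e}")

# Simple sensitivity check: widen the phenomenological-time uncertainty
t_phen_wide = rng.normal(loc=45.0, scale=16.0, size=n)
print(f"HEP with doubled phenomenological spread: {np.mean(t_perf > t_phen_wide):.3e}")
```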
Signal detection in global mean temperatures after "Paris": an uncertainty and sensitivity analysis
NASA Astrophysics Data System (ADS)
Visser, Hans; Dangendorf, Sönke; van Vuuren, Detlef P.; Bregman, Bram; Petersen, Arthur C.
2018-02-01
In December 2015, 195 countries agreed in Paris to hold the increase in global mean surface temperature (GMST) well below 2.0 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C. Since large financial flows will be needed to keep GMSTs below these targets, it is important to know how GMST has progressed since pre-industrial times. However, the Paris Agreement is not conclusive as regards methods to calculate it. Should trend progression be deduced from GCM simulations or from instrumental records by (statistical) trend methods? Which simulations or GMST datasets should be chosen, and which trend models? What is pre-industrial, and, finally, are the Paris targets formulated for total warming, originating from both natural and anthropogenic forcing, or do they refer to anthropogenic warming only? To find answers to these questions we performed an uncertainty and sensitivity analysis where datasets and model choices have been varied. For all cases we evaluated trend progression along with uncertainty information. To do so, we analysed four trend approaches and applied these to the five leading observational GMST products. We find GMST progression to be largely independent of various trend model approaches. However, GMST progression is significantly influenced by the choice of GMST datasets. Uncertainties due to natural variability are largest in size. As a parallel path, we calculated GMST progression from an ensemble of 42 GCM simulations. Mean progression derived from GCM-based GMSTs appears to lie in the range of trend-dataset combinations. A difference between both approaches appears to be the width of uncertainty bands: GCM simulations show a much wider spread. Finally, we discuss various choices for pre-industrial baselines and the role of warming definitions. Based on these findings we propose an estimate for signal progression in GMSTs since pre-industrial times.
Development and Testing of Neutron Cross Section Covariance Data for SCALE 6.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Williams, Mark L; Wiarda, Dorothea
2015-01-01
Neutron cross-section covariance data are essential for many sensitivity/uncertainty and uncertainty quantification assessments performed both within the TSUNAMI suite and more broadly throughout the SCALE code system. The release of ENDF/B-VII.1 included a more complete set of neutron cross-section covariance data: these data form the basis for a new cross-section covariance library to be released in SCALE 6.2. A range of testing is conducted to investigate the properties of these covariance data and ensure that the data are reasonable. These tests include examination of the uncertainty in critical experiment benchmark model keff values due to nuclear data uncertainties, as well as similarity assessments of irradiated pressurized water reactor (PWR) and boiling water reactor (BWR) fuel with suites of critical experiments. The contents of the new covariance library, the testing performed, and the behavior of the new covariance data are described in this paper. The neutron cross-section covariances can be combined with a sensitivity data file generated using the TSUNAMI suite of codes within SCALE to determine the uncertainty in system keff caused by nuclear data uncertainties. The Verified, Archived Library of Inputs and Data (VALID) maintained at Oak Ridge National Laboratory (ORNL) contains over 400 critical experiment benchmark models, and sensitivity data are generated for each of these models. The nuclear data uncertainty in keff is generated for each experiment, and the resulting uncertainties are tabulated and compared to the differences in measured and calculated results. The magnitude of the uncertainty for categories of nuclides (such as actinides, fission products, and structural materials) is calculated for irradiated PWR and BWR fuel to quantify the effect of covariance library changes between the SCALE 6.1 and 6.2 libraries. One of the primary applications of sensitivity/uncertainty methods within SCALE is the assessment of similarities between benchmark experiments and safety applications. This is described by a ck value for each experiment with each application. Several studies have analyzed typical ck values for a range of critical experiments compared with hypothetical irradiated fuel applications. The ck value is sensitive to the cross-section covariance data because the contribution of each nuclide is influenced by its uncertainty; large uncertainties indicate more likely bias sources and are thus given more weight. Changes in ck values resulting from different covariance data can be used to examine and assess underlying data changes. These comparisons are performed for PWR and BWR fuel in storage and transportation systems.
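The sketch below shows, on a deliberately tiny example, how a sensitivity-based uncertainty and a ck similarity index can be computed from a covariance matrix: the keff uncertainty is sqrt(S C S^T) and ck is the correlation of the nuclear-data-induced uncertainties shared by an application and an experiment, one common formulation assumed here. The three-parameter sensitivity vectors and covariance matrix are invented for illustration; real TSUNAMI sensitivity data files contain thousands of nuclide/reaction/group entries.

```python
# Hedged sketch of sensitivity/uncertainty propagation and the ck similarity
# index: ck = (S_app C S_exp^T) / (sigma_app * sigma_exp). All numbers are
# invented placeholders.
import numpy as np

# Relative covariance matrix of a few cross-section parameters (hypothetical)
C = np.array([[4.0e-4, 1.0e-4, 0.0],
              [1.0e-4, 9.0e-4, 2.0e-4],
              [0.0,    2.0e-4, 1.6e-3]])

S_app = np.array([0.30, 0.15, 0.05])   # application keff sensitivities (hypothetical)
S_exp = np.array([0.25, 0.20, 0.02])   # experiment keff sensitivities (hypothetical)

def keff_uncertainty(S, C):
    """Relative keff uncertainty due to nuclear data: sqrt(S C S^T)."""
    return float(np.sqrt(S @ C @ S))

sigma_app = keff_uncertainty(S_app, C)
sigma_exp = keff_uncertainty(S_exp, C)
ck = float(S_app @ C @ S_exp) / (sigma_app * sigma_exp)

print(f"keff uncertainty (application): {100*sigma_app:.2f}%")
print(f"keff uncertainty (experiment) : {100*sigma_exp:.2f}%")
print(f"ck similarity                 : {ck:.3f}")
```

Because the covariance matrix enters both the uncertainties and the numerator, changing the covariance library changes ck values even when the sensitivity profiles are unchanged, which is why ck shifts are a useful diagnostic of underlying data changes.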
Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.
2016-01-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354
Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B
2016-04-01
Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebral spinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Sandford, Stephen P.
2010-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that are used to enable validated long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies that address the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in the large time/space scale averages that are key to understanding decadal changes.
NASA Astrophysics Data System (ADS)
Rana, Sachin; Ertekin, Turgay; King, Gregory R.
2018-05-01
Reservoir history matching is frequently viewed as an optimization problem which involves minimizing misfit between simulated and observed data. Many gradient-based and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, but they typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented in which it is shown that GP-VARS provides history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
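The sketch below illustrates only the forward-proxy part of this idea: a Gaussian-process surrogate of the history-match misfit refined by Bayesian optimization with an expected-improvement acquisition. The one-dimensional "misfit" function, parameter ranges, and all settings are toy placeholders; the published GP-VARS workflow additionally uses an inverse GP model and VARS-based sensitivity analysis, neither of which is reproduced here.

```python
# Illustrative GP-surrogate + Bayesian-optimization loop for a history-match
# misfit. The misfit function and all settings are hypothetical placeholders.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def misfit(perm):
    """Stand-in for an expensive reservoir-simulation misfit (hypothetical)."""
    return (perm - 320.0) ** 2 / 1e4 + 0.05 * np.sin(perm / 15.0)

rng = np.random.default_rng(7)
X = rng.uniform(100.0, 600.0, size=(6, 1))       # initial simulation designs [mD]
y = misfit(X).ravel()

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=50.0),
                              normalize_y=True)

x_grid = np.linspace(100.0, 600.0, 501).reshape(-1, 1)
for _ in range(15):                               # BO iterations (each = 1 "simulation")
    gp.fit(X, y)
    mu, sd = gp.predict(x_grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = x_grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, misfit(x_next)[0])

print(f"best permeability found: {X[np.argmin(y)][0]:.1f} mD, misfit = {y.min():.4f}")
```

The design choice here mirrors the abstract's motivation: each Bayesian-optimization step proposes one new simulation where the surrogate expects the largest improvement, so far fewer full simulations are needed than with population-based searches.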
An early warning system for high climate sensitivity? (Invited)
NASA Astrophysics Data System (ADS)
Pierrehumbert, R.
2010-12-01
The scientific case for the clear and present danger of global warming has been unassailable at least since the release of the Charney Report more than thirty years ago, if not longer. While prompt action to begin decarbonizing energy systems could still head off much of the potential warming, it is distinctly possible that emissions will continue unabated in the coming decades, leading to a doubling or more of pre-industrial carbon dioxide concentrations. At present, we are in the unenviable position of not even knowing how bad things will get if this scenario comes to pass, because of the uncertainty in climate sensitivity. If climate sensitivity is high, then the consequences will be dire, perhaps even catastrophic. As the world continues to warm in response to continued carbon dioxide emissions, will we at least be able to monitor the climate and provide an early warning that the planet is on a high-sensitivity track, if such turns out to be the case? At what point will we actually know the climate sensitivity? It has long been recognized that the prime contributor to uncertainty in climate sensitivity is uncertainty in cloud feedbacks. Study of paleoclimate and climate of the past century has not been able to resolve which models do cloud feedback most correctly, because of uncertainties in radiative forcing. In this talk, I will discuss monitoring requirements, and analysis techniques, that might have the potential to determine which climate models most faithfully represent climate feedbacks, and thus determine which models provide the best estimate of climate sensitivity. The endeavor is complicated by the distinction between transient climate response and equilibrium climate sensitivity. I will discuss the particular challenges posed by this issue, particularly in light of recent indications that the pattern of ocean heat storage may lead to different cloud feedbacks in the transient warming stage than apply once the system has reached equilibrium. Apart from this problem, the transient nature of climate response driven by increasing CO2 requires careful monitoring of ocean heat storage as well as top-of-atmosphere radiative budgets, if climate sensitivity is to be estimated. Water vapor feedback is not considered as uncertain as cloud feedback, but there is still a considerable potential for surprises. I will discuss microwave monitoring requirements for tracking water vapor feedback. At the other extreme, the longer term feedbacks that contribute to Earth System Sensitivity are even more uncertain than cloud feedbacks, particularly with regard to the terrestrial carbon cycle. Prospects for obtaining an early warning of a PETM-type organic carbon release seem bleak. Finally, I will discuss the particular challenge of obtaining an early warning of high climate sensitivity in the case that the climate system has a bifurcation.
Lu, Xin; Soto, Marcelo A; Thévenaz, Luc
2017-07-10
A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the ϕOTDR-based system proposed here requires access to only one fiber end, and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
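Discrimination schemes of this kind reduce, at each fiber position, to inverting a 2x2 linear system that maps temperature and strain changes onto the two measured spectral shifts; how well conditioned that matrix is determines how strongly measurement noise is amplified into temperature-strain uncertainty. The sketch below shows this inversion and a condition-number check; the sensitivity coefficients, units, and measured values are invented for illustration and are not the coefficients of the ϕOTDR/birefringence system above.

```python
# Minimal sketch of temperature-strain discrimination from two measurands:
# [shift_1; shift_2] = K [dT; d_eps]. The coefficients below are hypothetical.
import numpy as np

# Hypothetical sensitivity matrix K: rows = measurands, columns = (temperature, strain)
K = np.array([[-1.20, -0.15],    # measurand 1 sensitivities [GHz/K, GHz/ue]
              [ 55.0,  -0.90]])  # measurand 2 sensitivities [MHz/K, MHz/ue]

measured = np.array([-0.10, 2.0])          # example measured shifts (hypothetical)
dT, deps = np.linalg.solve(K, measured)    # invert the 2x2 system
print(f"recovered dT = {dT:.3f} K, d(strain) = {deps:.3f} ue")

# Conditioning check: a smaller condition number (for a given scaling of the
# measurands) means noise is amplified less when projected onto (dT, d_eps).
print(f"condition number of K: {np.linalg.cond(K):.1f}")
```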
An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...
Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)
NASA Astrophysics Data System (ADS)
Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.
2016-06-01
We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used instead of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating separately subduction and background (crustal) earthquakes allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.
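Step 4 of the procedure, ensemble modelling of epistemic uncertainty, can be sketched as taking weighted statistics across the hazard curves produced by alternative model formulations. The three exceedance-probability curves and their credibility weights below are invented placeholders, and the weighted-quantile summary is one simple option assumed here rather than the specific ensemble statistics used in the study.

```python
# Schematic ensemble modelling of epistemic uncertainty: alternative model
# formulations each give a hazard curve; weighted mean and quantiles across the
# ensemble summarize the epistemic spread. All numbers are hypothetical.
import numpy as np

intensity = np.linspace(0.0, 5.0, 51)          # e.g. maximum inundation height [m]

# Exceedance-probability curves from alternative formulations (hypothetical)
members = np.array([np.exp(-intensity / s) for s in (0.6, 0.9, 1.3)])
weights = np.array([0.5, 0.3, 0.2])            # credibility weights, sum to 1

mean_curve = weights @ members                 # weighted mean hazard curve

def weighted_quantile(values, q, w):
    """Weighted quantile across ensemble members at one intensity level."""
    order = np.argsort(values)
    cw = np.cumsum(w[order])
    return np.interp(q, cw / cw[-1], values[order])

p16 = [weighted_quantile(members[:, i], 0.16, weights) for i in range(len(intensity))]
p84 = [weighted_quantile(members[:, i], 0.84, weights) for i in range(len(intensity))]

i = 20  # index of an example intensity level
print(f"at {intensity[i]:.1f} m: mean exceedance prob = {mean_curve[i]:.3f}, "
      f"16-84% epistemic range = [{p16[i]:.3f}, {p84[i]:.3f}]")
```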
Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.
2016-01-01
Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimate while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to input variables, land surface temperature (LST) and reference ET (ETo); and parameters, differential temperature (dT), and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
Jiang, Yueyang; Zhuang, Qianlai; Schaphoff, Sibyll; Sitch, Stephen; Sokolov, Andrei; Kicklighter, David; Melillo, Jerry
2012-03-01
This study aims to assess how high-latitude vegetation may respond under various climate scenarios during the 21st century with a focus on analyzing model parameter-induced uncertainty and how this uncertainty compares to the uncertainty induced by various climates. The analysis was based on a set of 10,000 Monte Carlo ensemble Lund-Potsdam-Jena (LPJ) simulations for the northern high latitudes (45°N and polewards) for the period 1900-2100. The LPJ Dynamic Global Vegetation Model (LPJ-DGVM) was run under contemporary and future climates from four Special Report Emission Scenarios (SRES), A1FI, A2, B1, and B2, based on the Hadley Centre General Circulation Model (GCM), and six climate scenarios, X901M, X902L, X903H, X904M, X905L, and X906H from the Integrated Global System Model (IGSM) at the Massachusetts Institute of Technology (MIT). In the current dynamic vegetation model, some parameters are more important than others in determining the vegetation distribution. Parameters that control plant carbon uptake and light-use efficiency have the predominant influence on the vegetation distribution of both woody and herbaceous plant functional types. The relative importance of different parameters varies temporally and spatially and is influenced by climate inputs. In addition to climate, these parameters play an important role in determining the vegetation distribution in the region. The parameter-based uncertainties contribute most to the total uncertainty. The current warming conditions lead to a complexity of vegetation responses in the region. Temperate trees will be more sensitive to climate variability, compared with boreal forest trees and C3 perennial grasses. This sensitivity would result in a unanimous northward greenness migration due to anomalous warming in the northern high latitudes. Temporally, boreal needleleaved evergreen plants are projected to decline considerably, and a large portion of C3 perennial grass is projected to disappear by the end of the 21st century. In contrast, the area of temperate trees would increase, especially under the most extreme A1FI scenario. As the warming continues, the northward greenness expansion in the Arctic region could continue.
Simulation-based optimization framework for reuse of agricultural drainage water in irrigation.
Allam, A; Tawfik, A; Yoshimura, C; Fleifle, A
2016-05-01
A simulation-based optimization framework for agricultural drainage water (ADW) reuse has been developed through the integration of a water quality model (QUAL2Kw) and a genetic algorithm. This framework was applied to the Gharbia drain in the Nile Delta, Egypt, in summer and winter 2012. First, the water quantity and quality of the drain was simulated using the QUAL2Kw model. Second, uncertainty analysis and sensitivity analysis based on Monte Carlo simulation were performed to assess QUAL2Kw's performance and to identify the most critical variables for determination of water quality, respectively. Finally, a genetic algorithm was applied to maximize the total reuse quantity from seven reuse locations with the condition not to violate the standards for using mixed water in irrigation. The water quality simulations showed that organic matter concentrations are critical management variables in the Gharbia drain. The uncertainty analysis showed the reliability of QUAL2Kw to simulate water quality and quantity along the drain. Furthermore, the sensitivity analysis showed that the 5-day biochemical oxygen demand, chemical oxygen demand, total dissolved solids, total nitrogen and total phosphorous are highly sensitive to point source flow and quality. Additionally, the optimization results revealed that the reuse quantities of ADW can reach 36.3% and 40.4% of the available ADW in the drain during summer and winter, respectively. These quantities meet 30.8% and 29.1% of the drainage basin requirements for fresh irrigation water in the respective seasons. Copyright © 2016 Elsevier Ltd. All rights reserved.
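The optimization step described above can be sketched as a genetic algorithm searching for reuse quantities at the seven locations that maximize total reuse while keeping the quality of the blended irrigation water within a standard. In the toy version below, the QUAL2Kw water-quality model is replaced by a crude flow-weighted BOD mixing rule, and all availabilities, concentrations, and limits are hypothetical; only the penalized-fitness GA structure is illustrated.

```python
# Toy genetic algorithm for the ADW-reuse optimization: maximize total reuse at
# seven locations subject to a blended-water BOD limit. The mixing "model" is a
# crude stand-in for QUAL2Kw and every number is hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_loc, pop_size, n_gen = 7, 60, 200
avail = np.array([4.0, 3.5, 5.0, 2.0, 6.0, 3.0, 4.5])   # available ADW per location [m3/s]
bod_drain = np.array([28, 35, 22, 40, 25, 30, 33.0])    # drain BOD at each location [mg/L]
bod_fresh, bod_limit = 6.0, 20.0                        # freshwater BOD and reuse standard
fresh = 2.0                                             # freshwater available for blending [m3/s]

def fitness(x):
    """Total reuse, heavily penalized if any blended BOD exceeds the limit."""
    mixed_bod = (x * bod_drain + fresh * bod_fresh) / (x + fresh)
    penalty = 1e3 * np.maximum(mixed_bod - bod_limit, 0.0).sum()
    return x.sum() - penalty

pop = rng.uniform(0.0, avail, size=(pop_size, n_loc))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]              # truncation selection
    mates = parents[rng.integers(0, len(parents), size=(pop_size, 2))]
    alpha = rng.random((pop_size, n_loc))
    children = alpha * mates[:, 0] + (1 - alpha) * mates[:, 1]      # blend crossover
    children += rng.normal(0.0, 0.05, size=children.shape) * avail  # mutation
    pop = np.clip(children, 0.0, avail)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("reuse per location [m3/s]:", best.round(2), " total:", best.sum().round(2))
```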
Chan, B
2015-01-01
Background: Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives: The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources: The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods: The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results: We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations: There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). Conclusions: Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
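A two-state (alive/dead) Markov cohort model of the kind described, with 3-month cycles, a lifetime horizon and 5% annual discounting, can be sketched in a few lines. All transition probabilities, costs and utilities below are placeholders for illustration only, not the report's inputs.

```python
import numpy as np

CYCLE_YEARS = 0.25                      # 3-month cycles
N_CYCLES = 160                          # ~40-year "lifetime" horizon (assumed)
DISCOUNT = 0.05                         # 5% annual discount rate

def run_arm(p_death_cycle, cost_per_cycle, utility, one_off_cost):
    """Return discounted lifetime cost and QALYs for one strategy (illustrative inputs)."""
    alive = 1.0
    cost, qaly = one_off_cost, 0.0
    for k in range(N_CYCLES):
        disc = 1.0 / (1.0 + DISCOUNT) ** (k * CYCLE_YEARS)
        cost += alive * cost_per_cycle * disc
        qaly += alive * utility * CYCLE_YEARS * disc
        alive *= (1.0 - p_death_cycle)  # transition alive -> dead
    return cost, qaly

# Placeholder arms: usual-intensity vs. higher-intensity physiotherapy
c0, q0 = run_arm(p_death_cycle=0.012, cost_per_cycle=900.0, utility=0.60, one_off_cost=8000.0)
c1, q1 = run_arm(p_death_cycle=0.011, cost_per_cycle=880.0, utility=0.64, one_off_cost=9500.0)
icer = (c1 - c0) / (q1 - q0)
print(f"Incremental cost {c1 - c0:,.0f} CAD, incremental QALYs {q1 - q0:.3f}, ICER {icer:,.0f} CAD/QALY")
```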
NASA Astrophysics Data System (ADS)
Khan, Tanvir R.; Perlinger, Judith A.
2017-10-01
Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
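For the sensitivity dimension, a variance-based (Sobol') analysis of a deposition-velocity model can be set up with the SALib package as sketched below. The parameter names, bounds and the toy Vd function are illustrative assumptions, not the actual Z01-ZS14 formulations.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Illustrative inputs often appearing in dry-deposition schemes (bounds are assumptions).
problem = {
    "num_vars": 3,
    "names": ["u_star", "RH", "d_p"],   # friction velocity (m/s), relative humidity (%), diameter (um)
    "bounds": [[0.1, 1.0], [40.0, 95.0], [0.005, 2.5]],
}

def toy_vd(u_star, rh, d_p):
    """Stand-in deposition-velocity model; only for demonstrating the workflow."""
    return 0.02 * u_star * (1.0 + 0.01 * rh) / (1.0 + d_p)

X = saltelli.sample(problem, 1024)      # N*(2D+2) parameter sets
Y = np.array([toy_vd(*row) for row in X])
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))   # first-order indices
print(dict(zip(problem["names"], np.round(Si["ST"], 3))))   # total-order indices
```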
Damage assessment of composite plate structures with material and measurement uncertainty
NASA Astrophysics Data System (ADS)
Chandrashekhar, M.; Ganguli, Ranjan
2016-06-01
Composite materials are very useful in structural engineering, particularly in weight-sensitive applications. Two different test models of the same structure made from composite materials can display very different dynamic behavior due to large uncertainties associated with composite material properties. Also, composite structures can suffer from pre-existing imperfections like delaminations, voids or cracks introduced during fabrication. In this paper, we show that modeling and material uncertainties in composite structures can cause considerable problems in damage assessment. A recently developed C0 shear-deformable, locking-free refined composite plate element is employed in the numerical simulations to alleviate modeling uncertainty. A qualitative estimate of the impact of modeling uncertainty on the damage detection problem is made. A robust Fuzzy Logic System (FLS) with a sliding-window defuzzifier is used for delamination damage detection in composite plate-type structures. The FLS is designed using variations in modal frequencies due to randomness in material properties. Probabilistic analysis is performed using Monte Carlo Simulation (MCS) on a composite plate finite element model. It is demonstrated that the FLS shows excellent robustness in delamination detection at very high levels of randomness in input data.
LeBouthillier, Daniel M; Asmundson, Gordon J G
2015-01-01
Several mechanisms have been posited for the anxiolytic effects of exercise, including reductions in anxiety sensitivity through interoceptive exposure. Studies on aerobic exercise lend support to this hypothesis; however, research investigating aerobic exercise in comparison to placebo, the dose-response relationship between aerobic exercise and anxiety sensitivity, the efficacy of aerobic exercise across the spectrum of anxiety sensitivity and the effect of aerobic exercise on other related constructs (e.g. intolerance of uncertainty, distress tolerance) is lacking. We explored reductions in anxiety sensitivity and related constructs following a single session of exercise in a community sample using a randomized controlled trial design. Forty-one participants completed 30 min of aerobic exercise or a placebo stretching control. Anxiety sensitivity, intolerance of uncertainty and distress tolerance were measured at baseline, post-intervention and 3-day and 7-day follow-ups. Individuals in the aerobic exercise group, but not the control group, experienced significant reductions with moderate effect sizes in all dimensions of anxiety sensitivity. Intolerance of uncertainty and distress tolerance remained unchanged in both groups. Our trial supports the efficacy of aerobic exercise in uniquely reducing anxiety sensitivity in individuals with varying levels of the trait and highlights the importance of empirically validating the use of aerobic exercise to address specific mental health vulnerabilities. Aerobic exercise may have potential as a temporary substitute for psychotherapy aimed at reducing anxiety-related psychopathology.
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and monitoring networks for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the percent response of the output, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has larger impacts on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs and 3/4 of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of the composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
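The idea of a local sensitivity coefficient expressed as a percent response of the output to a small relative perturbation of an input, and of a composite sensitivity obtained by summing individual coefficients, can be sketched as follows. The forward model, base values and perturbation size are placeholders; the actual study used the FutureGen reservoir simulations.

```python
import numpy as np

def model(params):
    """Placeholder forward model returning a scalar output (e.g., plume area)."""
    k_h, p0, salt = params
    return k_h ** 0.7 * (1.0 + 0.002 * p0) * (1.0 - 0.1 * salt)

base = np.array([150.0, 200.0, 1.5])      # assumed base-case inputs
names = ["horizontal_permeability", "initial_pressure", "initial_salt_conc"]
y0 = model(base)

def lsc(i, rel_step=0.01):
    """Local sensitivity coefficient: percent change in output for a 1% change in input i."""
    pert = base.copy()
    pert[i] *= (1.0 + rel_step)
    return 100.0 * (model(pert) - y0) / y0

coeffs = {n: lsc(i) for i, n in enumerate(names)}
composite = sum(abs(v) for v in coeffs.values())   # composite sensitivity of this input subset
print(coeffs, "composite:", round(composite, 3))
```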
Integrating Solar PV in Utility System Operations: Analytical Framework and Arizona Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jing; Botterud, Audun; Mills, Andrew
2015-06-01
A systematic framework is proposed to estimate the impact on operating costs due to uncertainty and variability in renewable resources. The framework quantifies the integration costs associated with subhourly variability and uncertainty as well as day-ahead forecasting errors in solar PV (photovoltaics) power. A case study illustrates how changes in system operations may affect these costs for a utility in the southwestern United States (Arizona Public Service Company). We conduct an extensive sensitivity analysis under different assumptions about balancing reserves, system flexibility, fuel prices, and forecasting errors. We find that high solar PV penetrations may lead to operational challenges, particularly during low-load and high solar periods. Increased system flexibility is essential for minimizing integration costs and maintaining reliability. In a set of sensitivity cases where such flexibility is provided, in part, by flexible operations of nuclear power plants, the estimated integration costs vary between $1.0 and $4.4/MWh-PV for a PV penetration level of 17%. The integration costs are primarily due to higher needs for hour-ahead balancing reserves to address the increased sub-hourly variability and uncertainty in the PV resource. (C) 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Owens, Andrew; De Weck, Olivier L.; Stromgren, Chel; Goodliff, Kandyce; Cirillo, William
2017-01-01
Future crewed missions to Mars present a maintenance logistics challenge that is unprecedented in human spaceflight. Mission endurance – defined as the time between resupply opportunities – will be significantly longer than previous missions, and therefore logistics planning horizons are longer and the impact of uncertainty is magnified. Maintenance logistics forecasting typically assumes that component failure rates are deterministically known and uses them to represent aleatory uncertainty, or uncertainty that is inherent to the process being examined. However, failure rates cannot be directly measured; rather, they are estimated based on similarity to other components or statistical analysis of observed failures. As a result, epistemic uncertainty – that is, uncertainty in knowledge of the process – exists in failure rate estimates that must be accounted for. Analyses that neglect epistemic uncertainty tend to significantly underestimate risk. Epistemic uncertainty can be reduced via operational experience; for example, the International Space Station (ISS) failure rate estimates are refined using a Bayesian update process. However, design changes may re-introduce epistemic uncertainty. Thus, there is a tradeoff between changing a design to reduce failure rates and operating a fixed design to reduce uncertainty. This paper examines the impact of epistemic uncertainty on maintenance logistics requirements for future Mars missions, using data from the ISS Environmental Control and Life Support System (ECLS) as a baseline for a case study. Sensitivity analyses are performed to investigate the impact of variations in failure rate estimates and epistemic uncertainty on spares mass. The results of these analyses and their implications for future system design and mission planning are discussed.
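The Bayesian refinement of a failure-rate estimate from operating experience mentioned above is commonly done with a conjugate gamma prior and Poisson failure counts; the sketch below shows that standard update. The prior parameters, operating hours and observed failure count are illustrative, not ISS ECLS data.

```python
# Gamma-Poisson conjugate update of a component failure rate (per hour).
# Prior lambda ~ Gamma(alpha, beta) encodes the similarity-based estimate and its epistemic spread.
alpha_prior, beta_prior = 2.0, 4.0e5           # assumed: prior mean 5e-6 /hr, fairly uncertain
failures_observed, hours_operated = 3, 2.0e5   # assumed on-orbit experience

alpha_post = alpha_prior + failures_observed
beta_post = beta_prior + hours_operated

mean_post = alpha_post / beta_post
sd_post = alpha_post ** 0.5 / beta_post
print(f"posterior mean rate {mean_post:.2e}/hr, posterior sd {sd_post:.2e}/hr")
# A design change would re-introduce epistemic spread, e.g. by discounting alpha and beta.
```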
NASA Astrophysics Data System (ADS)
Zheltikov, A. M.
2018-06-01
Energy exchange between a thermodynamic ensemble of heat- and cold-activated cell-membrane ion channels and the surrounding heat reservoir is shown to impose fundamental limitations on the performance of such channels as temperature-controlled gates for thermal cell activation. Analysis of unavoidable thermodynamic internal-energy fluctuations caused by energy exchange between the ion channels and the heat bath suggests that the resulting enthalpy uncertainty is too high for a robust ion-current gating by a single ion channel, implying that large ensembles of ion channels are needed for thermal cell activation. We argue, based on this thermodynamic analysis, that, had thermosensitive cell-membrane ion channels operated individually, rather than as large ensembles, robust thermal cell activation would have been impossible because of thermodynamic fluctuations.
Hill, Shannon B; Faradzhev, Nadir S; Powell, Cedric J
2017-12-01
We discuss the problem of quantifying common sources of statistical uncertainties for analyses of trace levels of surface contamination using X-ray photoelectron spectroscopy. We examine the propagation of error for peak-area measurements using common forms of linear and polynomial background subtraction including the correlation of points used to determine both background and peak areas. This correlation has been neglected in previous analyses, but we show that it contributes significantly to the peak-area uncertainty near the detection limit. We introduce the concept of relative background subtraction variance (RBSV) which quantifies the uncertainty introduced by the method of background determination relative to the uncertainty of the background area itself. The uncertainties of the peak area and atomic concentration and of the detection limit are expressed using the RBSV, which separates the contributions from the acquisition parameters, the background-determination method, and the properties of the measured spectrum. These results are then combined to find acquisition strategies that minimize the total measurement time needed to achieve a desired detection limit or atomic-percentage uncertainty for a particular trace element. Minimization of data-acquisition time is important for samples that are sensitive to x-ray dose and also for laboratories that need to optimize throughput.
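As an illustration of the kind of propagation of error involved, the sketch below estimates a net peak area by subtracting a background defined from flanking channels and propagates Poisson counting uncertainty, including the contribution of the background-defining points to the peak-area variance. The synthetic spectrum and window choices are assumptions; the RBSV formalism of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
channels = np.arange(200)
true_bg = 50.0 + 0.05 * channels                                  # slowly varying background
true_peak = 30.0 * np.exp(-0.5 * ((channels - 100) / 5.0) ** 2)   # trace-level peak
spectrum = rng.poisson(true_bg + true_peak)                       # counted spectrum (Poisson)

peak_win = (channels >= 85) & (channels <= 115)
bg_win = ((channels >= 60) & (channels < 85)) | ((channels > 115) & (channels <= 140))

n_p, n_b = peak_win.sum(), bg_win.sum()
gross = spectrum[peak_win].sum()
bg_sum = spectrum[bg_win].sum()

area = gross - n_p * bg_sum / n_b                                 # net peak area
# Poisson propagation: the background-defining channels contribute (n_p/n_b)^2 * sum(bg) to the variance.
var_area = gross + (n_p / n_b) ** 2 * bg_sum
print(f"net area {area:.0f} +/- {np.sqrt(var_area):.0f} counts")
```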
van der Burg, Max Post; Tyre, Andrew J
2011-01-01
Wildlife managers often make decisions under considerable uncertainty. In the most extreme case, a complete lack of data leads to uncertainty that is unquantifiable. Information-gap decision theory deals with assessing management decisions under extreme uncertainty, but it is not widely used in wildlife management. So too, robust population management methods were developed to deal with uncertainties in multiple-model parameters. However, the two methods have not, as yet, been used in tandem to assess population management decisions. We provide a novel combination of the robust population management approach for matrix models with the information-gap decision theory framework for making conservation decisions under extreme uncertainty. We applied our model to the problem of nest survival management in an endangered bird species, the Mountain Plover (Charadrius montanus). Our results showed that matrix sensitivities suggest that nest management is unlikely to have a strong effect on population growth rate, confirming previous analyses. However, given the amount of uncertainty about adult and juvenile survival, our analysis suggested that maximizing nest marking effort was a more robust decision to maintain a stable population. Focusing on the twin concepts of opportunity and robustness in an information-gap model provides a useful method of assessing conservation decisions under extreme uncertainty.
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight in the regulatory functions and how to manipulate them. Constraint based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Evaluating uncertainty and parameter sensitivity in environmental models can be a difficult task, even for low-order, single-media constructs driven by a unique set of site-specific data. The challenge of examining ever more complex, integrated, higher-order models is a formidab...
Aspen succession in the Intermountain West: A deterministic model
Dale L. Bartos; Frederick R. Ward; George S. Innis
1983-01-01
A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...
This paper presents an analysis of the CMAQ v4.5 model performance for particulate matter and its chemical components for the simulated year 2001. This is part two of a two-part series of papers that examines the model performance of CMAQ v4.5.
Sensitivity analysis of tracer transport in variably saturated soils at USDA-ARS OPE3 field site
USDA-ARS?s Scientific Manuscript database
The objective of this study was to assess the effects of uncertainties in hydrologic and geochemical parameters on the results of simulations of the tracer transport in variably saturated soils at the USDA-ARS OPE3 field site. A tracer experiment with a pulse of KCL solution applied to an irrigatio...
Tutorial: Parallel Computing of Simulation Models for Risk Analysis.
Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D
2016-10-01
Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
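The tutorial's examples are in MATLAB and R; as an analogous sketch in Python, the snippet below parallelizes an embarrassingly parallel Monte Carlo risk simulation across worker processes with the standard library. The toy loss model is an assumption for illustration only.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def simulate_batch(args):
    """One independent batch of Monte Carlo replications (no communication between batches)."""
    seed, n = args
    rng = np.random.default_rng(seed)
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=n)   # toy uncertain inputs
    capacity = rng.normal(loc=25.0, scale=3.0, size=n)
    return np.maximum(demand - capacity, 0.0)             # toy consequence (loss) model

if __name__ == "__main__":
    batches = [(seed, 250_000) for seed in range(8)]      # distinct seeds per worker
    with ProcessPoolExecutor() as pool:
        losses = np.concatenate(list(pool.map(simulate_batch, batches)))
    print(f"mean loss {losses.mean():.3f}, 99th percentile {np.percentile(losses, 99):.3f}")
```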
Measurement Uncertainty Budget of the PMV Thermal Comfort Equation
NASA Astrophysics Data System (ADS)
Ekici, Can
2016-05-01
Fanger's predicted mean vote (PMV) equation expresses the combined quantitative effects of air temperature, mean radiant temperature, air velocity, humidity, activity level and clothing thermal resistance. PMV is a mathematical model of thermal comfort developed by Fanger. In this study, the uncertainty budget of the PMV equation was developed according to the GUM. An example of the uncertainty model of PMV is given in the exemplification section of the study. Sensitivity coefficients were derived from the PMV equation, and the uncertainty budgets are presented in tables. A mathematical model of the sensitivity coefficients of T_a, h_c, T_{mrt}, T_{cl}, and P_a is given, together with the uncertainty budgets for h_c, T_{cl}, and P_a.
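For reference, the combined standard uncertainty underlying such a GUM budget combines the sensitivity coefficients c_i = ∂PMV/∂x_i with the standard uncertainties u(x_i) of the inputs (here x_i ranges over T_a, h_c, T_{mrt}, T_{cl}, and P_a), assuming uncorrelated inputs:

u_c^2(\mathrm{PMV}) = \sum_i \left( \frac{\partial\,\mathrm{PMV}}{\partial x_i} \right)^2 u^2(x_i)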
Sensitivity tests to define the source apportionment performance criteria in the DeltaSA tool
NASA Astrophysics Data System (ADS)
Pernigotti, Denise; Belis, Claudio A.
2017-04-01
Identification and quantification of the contribution of emission sources to a given area is a key task for the design of abatement strategies. Moreover, European member states are obliged to report this kind of information for zones where pollution levels exceed the limit values. At present, little is known about the performance and uncertainty of the variety of methodologies used for source apportionment and about the comparability between the results of studies using different approaches. The source apportionment Delta (SA Delta) is a tool developed by the EC-JRC to support particulate matter source apportionment modellers in the identification of sources (for factor analysis studies) and/or in the measurement of their performance. The source identification is performed by the tool by measuring the proximity of any user chemical profile to preloaded repository data (SPECIATE and SPECIEUROPE). The model performance criteria are based on standard statistical indexes calculated by comparing participants' source contribution estimates, and their time series, with preloaded reference data. Those preloaded data refer to previous European SA intercomparison exercises: the first with real-world data (22 participants), the second with synthetic data (25 participants) and the last with real-world data, which was also extended to Chemical Transport Models (38 receptor models and 4 CTMs). The references used for the model performance criteria are 'true' values (predefined by JRC) for the synthetic exercise, while they are calculated as the ensemble average of the participants' results in the real-world intercomparisons. The candidates used for each source ensemble reference calculation were selected among participants' results based on a number of consistency checks and on the similarity of their chemical profiles to the repository measured data. The estimation of the ensemble reference uncertainty is crucial in order to evaluate users' performance against it. For this reason, a sensitivity analysis on different methods for estimating the ensemble references' uncertainties was performed by re-analyzing the synthetic intercomparison dataset, the only one in which 'true' reference and ensemble reference contributions were both present. The SA Delta tool is now available online and will be presented, with a critical discussion of the sensitivity analysis on the ensemble reference uncertainty. In particular, the degree of mutual agreement among participants on the presence of a certain source should be taken into account. Moreover, the importance of synthetic intercomparisons for catching biases common to receptor models will be stressed.
Uncertainty quantification for personalized analyses of human proximal femurs.
Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar
2016-02-29
Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify the influence we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 are attributable to the uncertainty in the E-ρ relationship, while ≈10% are caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial inferior sides of the neck exhibited the largest probabilities for tensile and compression failure, however all were very small (pf<0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Uncertainty Quantification of Water Quality in Tamsui River in Taiwan
NASA Astrophysics Data System (ADS)
Kao, D.; Tsai, C.
2017-12-01
In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area, and also to provide a new analysis method for government or companies to establish related control and design measures. In this research, three methods are utilized to carry out the uncertainty analysis step by step with Mike 21, which is widely used for hydrodynamics and water quality modeling, and the study area is focused on the Tamsui River watershed. First, a sensitivity analysis is conducted to rank the influence of parameters and variables such as dissolved oxygen, nitrate, ammonia and phosphorus. Then we use first-order error analysis (FOEA) to determine the number of parameters that significantly affect the variability of simulation results. Finally, a state-of-the-art method for uncertainty analysis called the perturbance moment method (PMM) is applied, which is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when multiple uncertain parameters and variables are involved. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output can be expressed through the representative points and perturbance moments based on the parallel axis theorem. With the assumption of independent parameters and variables, the calculation time for PMM is significantly reduced compared with MCS for comparable modeling accuracy.
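The three-representative-point idea can be illustrated with a generic point-estimate propagation: for each independent Gaussian input, three Gauss-Hermite nodes and weights reproduce the output mean and standard deviation with far fewer model runs than Monte Carlo. The water-quality response function below is a placeholder, not the Mike 21 model, and this sketch is not the PMM algorithm itself.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from itertools import product

# Three probabilists' Gauss-Hermite nodes/weights (weights normalised to sum to 1).
nodes, weights = hermegauss(3)
weights = weights / weights.sum()

# Assumed independent Gaussian inputs: (mean, std) for two loading parameters.
inputs = [(4.0, 0.8), (1.2, 0.3)]

def response(ammonia_load, phosphorus_load):
    """Placeholder water-quality response (e.g., downstream DO deficit)."""
    return 2.0 + 0.9 * ammonia_load + 1.5 * phosphorus_load ** 2

mean = var = 0.0
for idx in product(range(3), repeat=len(inputs)):
    w = np.prod([weights[i] for i in idx])
    x = [mu + sd * nodes[i] for (mu, sd), i in zip(inputs, idx)]
    y = response(*x)
    mean += w * y
    var += w * y ** 2
var -= mean ** 2
print(f"output mean {mean:.3f}, std {np.sqrt(var):.3f} (9 model runs)")
```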
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
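A minimal version of the correlation-based part of such a study, ranking inputs by Pearson and Spearman coefficients against a response over a sampled design, is sketched below with a toy model standing in for BISON; the input names and ranges are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
n = 300
# Assumed inputs: linear power (kW/m), gap conductance (W/m^2K), fuel conductivity (W/mK)
X = np.column_stack([
    rng.uniform(15.0, 25.0, n),
    rng.uniform(3e3, 8e3, n),
    rng.uniform(2.5, 4.5, n),
])
names = ["linear_power", "gap_conductance", "fuel_conductivity"]

def toy_centerline_temp(q, h_gap, k_fuel):
    """Stand-in response; not the BISON fuel performance model."""
    return 500.0 + 40.0 * q / k_fuel + 2.0e5 / h_gap

Y = toy_centerline_temp(X[:, 0], X[:, 1], X[:, 2])

for j, name in enumerate(names):
    r_p = pearsonr(X[:, j], Y)[0]
    r_s = spearmanr(X[:, j], Y)[0]
    print(f"{name:20s} Pearson {r_p:+.2f}  Spearman {r_s:+.2f}")
```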
Zhao, Yueyuan; Zhang, Xuefeng; Zhu, Fengcai; Jin, Hui; Wang, Bei
2016-08-02
Objective: To estimate the cost-effectiveness of hepatitis E vaccination among pregnant women in epidemic regions. Methods: A decision tree model was constructed to evaluate the cost-effectiveness of 3 hepatitis E virus vaccination strategies from the societal perspective. The model parameters were estimated on the basis of published studies and experts' experience. Sensitivity analysis was used to evaluate the uncertainties of the model. Results: Vaccination was more economically effective on the basis of the incremental cost-effectiveness ratio (ICER < 3 times China's per capita gross domestic product per quality-adjusted life-year); moreover, screening plus vaccination had higher QALYs and lower costs compared with universal vaccination. No parameter significantly impacted the ICER in one-way sensitivity analysis, and probabilistic sensitivity analysis also showed screening plus vaccination to be the dominant strategy. Conclusion: Screening plus vaccination is the most economical strategy for pregnant women in epidemic regions; however, further studies are necessary to confirm the efficacy and safety of the hepatitis E vaccines.
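A probabilistic sensitivity analysis of the kind reported can be sketched by sampling the uncertain model inputs and tallying how often one strategy remains cost-effective at the chosen willingness-to-pay threshold. The distributions, costs and QALY gains below are placeholders, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
wtp = 3 * 8000.0                                        # assumed threshold: 3x per-capita GDP (illustrative)

# Placeholder uncertain increments for "screening plus vaccination" vs. "no vaccination".
delta_cost = rng.normal(120.0, 30.0, n)                 # incremental cost per woman
delta_qaly = rng.gamma(shape=4.0, scale=0.005, size=n)  # incremental QALYs

icer = delta_cost / delta_qaly
prob_ce = np.mean(delta_cost <= wtp * delta_qaly)       # net-benefit criterion
print(f"median ICER {np.median(icer):,.0f} per QALY, P(cost-effective at WTP) = {prob_ce:.2f}")
```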
Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings
NASA Astrophysics Data System (ADS)
Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.
2014-12-01
River discharge is a crucial variable for hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we describe a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.).
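As a schematic of step (2), the sketch below fits a single-control power-law rating curve Q = a (h - b)^c to a handful of uncertain gaugings with a simple Metropolis sampler, each gauging weighted by its own discharge uncertainty. The priors, gaugings and proposal widths are invented for illustration, and the paper's full multi-control treatment is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical gaugings: stage h (m), discharge Q (m3/s), and per-gauging uncertainty (std, m3/s).
h_obs = np.array([0.6, 0.9, 1.3, 1.8, 2.4, 3.1])
q_obs = np.array([2.1, 6.0, 14.5, 31.0, 60.0, 105.0])
q_sd = 0.08 * q_obs                                       # e.g. ~8% gauging uncertainty

def log_post(theta):
    a, b, c = theta
    if a <= 0 or c <= 0.5 or c >= 4.0 or b >= h_obs.min():   # weak priors / physical bounds
        return -np.inf
    q_mod = a * (h_obs - b) ** c
    return -0.5 * np.sum(((q_obs - q_mod) / q_sd) ** 2)

theta = np.array([10.0, 0.0, 1.8])                        # starting point
lp = log_post(theta)
samples = []
for _ in range(20_000):                                   # random-walk Metropolis
    prop = theta + rng.normal(0.0, [0.4, 0.02, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples[5_000:])                       # discard burn-in

# Propagate rating-curve uncertainty to discharge at an unmeasured stage.
h_new = 2.0
q_draws = samples[:, 0] * (h_new - samples[:, 1]) ** samples[:, 2]
print(f"Q(h=2.0 m): median {np.median(q_draws):.1f}, 95% interval "
      f"[{np.percentile(q_draws, 2.5):.1f}, {np.percentile(q_draws, 97.5):.1f}] m3/s")
```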
Measuring the uncertainty of coupling
NASA Astrophysics Data System (ADS)
Zhao, Xiaojun; Shang, Pengjian
2015-06-01
A new information-theoretic measure, called coupling entropy, is proposed here to detect the causal links in complex systems by taking into account the inner composition alignment of temporal structure. It is a permutation-based asymmetric association measure to infer the uncertainty of coupling between two time series. The coupling entropy is found to be effective in the analysis of Hénon maps, where different noises are added to test its accuracy and sensitivity. The coupling entropy is also applied to analyze the relationship between unemployment rate and CPI change in the U.S., where the CPI change turns out to be the driving variable while the unemployment rate is the responding one.
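Since the measure is permutation-based, its building block is the distribution of ordinal patterns of a time series; the sketch below computes that distribution and its Shannon entropy for an embedding dimension of 3. This illustrates the ingredient only; the coupling entropy itself combines such ordinal information across two series in the way defined in the paper.

```python
import numpy as np
from itertools import permutations
from collections import Counter

def ordinal_pattern_distribution(x, m=3):
    """Relative frequencies of the m! ordinal patterns in series x."""
    patterns = [tuple(int(v) for v in np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    counts = Counter(patterns)
    total = len(patterns)
    return {p: counts.get(p, 0) / total for p in permutations(range(m))}

def shannon_entropy(dist):
    p = np.array([v for v in dist.values() if v > 0.0])
    return float(-np.sum(p * np.log(p)))

# Example: ordinal structure of a noisy Henon-map coordinate.
rng = np.random.default_rng(0)
x, y = 0.1, 0.3
series = []
for _ in range(2000):
    x, y = 1.0 - 1.4 * x * x + y, 0.3 * x
    series.append(x)
series = np.array(series) + rng.normal(0.0, 0.01, len(series))

dist = ordinal_pattern_distribution(series, m=3)
print(f"ordinal-pattern entropy: {shannon_entropy(dist):.3f} nats (max {np.log(6):.3f})")
```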
Physically-based modelling of high magnitude torrent events with uncertainty quantification
NASA Astrophysics Data System (ADS)
Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth
2017-04-01
High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope, where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events by applying physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures in three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on literature review. Truncated a priori ranges of parameter values were then defined by expert solicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to comprehensively search for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W. T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261. Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks, Journal of Hydrology 523: 739-757.
Process-based modelling of NH3 exchange with grazed grasslands
NASA Astrophysics Data System (ADS)
Móring, Andrea; Vieno, Massimo; Doherty, Ruth M.; Milford, Celia; Nemitz, Eiko; Twigg, Marsailidh M.; Horváth, László; Sutton, Mark A.
2017-09-01
In this study, the GAG model, a process-based ammonia (NH3) emission model for urine patches, was extended and applied at the field scale. The new model (GAG_field) was tested over two modelling periods, for which micrometeorological NH3 flux data were available. Acknowledging uncertainties in the measurements, the model was able to simulate the main features of the observed fluxes. The temporal evolution of the simulated NH3 exchange flux was found to be dominated by NH3 emission from the urine patches, offset by simultaneous NH3 deposition to areas of the field not affected by urine. The simulations show how NH3 fluxes over a grazed field on a given day can be affected by urine patches deposited several days earlier, linked to the interaction of volatilization processes with soil pH dynamics. Sensitivity analysis showed that GAG_field was more sensitive to soil buffering capacity (β), field capacity (θfc) and permanent wilting point (θpwp) than the patch-scale model. The reason for these different sensitivities is twofold. Firstly, the difference originates from the different scales. Secondly, the difference can be explained by the different initial soil pH and physical properties, which determine the maximum volume of urine that can be stored in the NH3 source layer. It was found that in the case of urine patches with a higher initial soil pH and higher initial soil water content, the sensitivity of NH3 exchange to β was stronger. Also, in the case of a higher initial soil water content, NH3 exchange was more sensitive to the changes in θfc and θpwp. The sensitivity analysis showed that the nitrogen content of urine (cN) is associated with high uncertainty in the simulated fluxes. However, model experiments based on cN values randomized from an estimated statistical distribution indicated that this uncertainty is considerably smaller in practice. Finally, GAG_field was tested with a constant soil pH of 7.5. The variation of NH3 fluxes simulated in this way showed good agreement with those from the simulations with the original approach, accounting for a dynamically changing soil pH. These results suggest a way for model simplification when GAG_field is applied later at regional scale.
MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method
Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.
2003-01-01
A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
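The core FOSM operation is the first-order propagation of the input covariance through the model's sensitivity (Jacobian) matrix; a minimal sketch with a made-up two-parameter, three-head problem is shown below. The numbers stand in for the MODFLOW 2000 sensitivities and the geostatistically derived input uncertainties.

```python
import numpy as np

# Sensitivity (Jacobian) of 3 head locations to 2 uncertain inputs (d head / d ln T, d head / d recharge),
# e.g. as produced by the MODFLOW 2000 sensitivity process (values are illustrative).
J = np.array([
    [-1.8, 0.9],
    [-2.4, 1.3],
    [-0.7, 0.4],
])

# Covariance of the inputs from conditional geostatistical analysis (illustrative).
C_in = np.array([
    [0.25, 0.05],
    [0.05, 0.10],
])

C_head = J @ C_in @ J.T                 # first-order second moment propagation
head_sd = np.sqrt(np.diag(C_head))      # standard deviation of simulated head at each location
print("head standard deviations:", np.round(head_sd, 3))
```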
Addressing uncertainty in vulnerability assessments [Chapter 5
Linda Joyce; Molly Cross; Evan Girvatz
2011-01-01
This chapter addresses issues and approaches for dealing with uncertainty specifically within the context of conducting climate change vulnerability assessments (i.e., uncertainties related to identifying and modeling the sensitivities, levels of exposure, and adaptive capacity of the assessment targets).
NASA Astrophysics Data System (ADS)
Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.
2015-11-01
Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere to provide a better understanding of the impact of uncertainty in this parameter on models. Because temperature and pressure conditions are much lower than the laboratory conditions in which bimolecular diffusion parameters were measured, we apply a Bayesian framework, which is problem agnostic, to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library, which also performs a propagation of uncertainties in the calibrated parameters to temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated with temperature, and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and associated uncertainty to obtain an estimate with uncertainty due to bimolecular diffusion for the methane molar fraction as a function of altitude. Results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km, where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
Uncertainty Analysis of Simulated Hydraulic Fracturing
NASA Astrophysics Data System (ADS)
Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.
2012-12-01
Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface with respect to the objective function of the fractal dimension for fracture distributions was constructed in this stage. Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
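The first-stage sampling step (1,000 Latin-hypercube draws from an 11-dimensional constrained parameter space) can be sketched with scipy's quasi-Monte Carlo module; the parameter names and ranges below are placeholders rather than the study's actual bounds.

```python
import numpy as np
from scipy.stats import qmc

names = ["joint_friction", "joint_cohesion", "matrix_E", "matrix_nu", "sigma_h_min",
         "sigma_h_max", "fracture_density", "fracture_length", "pump_rate",
         "fluid_viscosity", "leakoff_coeff"]                       # 11 illustrative inputs
lower = np.array([20.0, 0.5, 10.0, 0.15, 20.0, 30.0, 0.1, 10.0, 0.01, 1e-3, 1e-6])
upper = np.array([40.0, 5.0, 60.0, 0.35, 40.0, 60.0, 2.0, 200.0, 0.20, 1e-1, 1e-4])

sampler = qmc.LatinHypercube(d=len(names), seed=2012)
unit_samples = sampler.random(n=1000)                              # 1000 points in [0, 1]^11
X = qmc.scale(unit_samples, lower, upper)                          # map to physical ranges

print(X.shape)   # (1000, 11) parameter sets to feed the fracture simulator
```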
NASA Astrophysics Data System (ADS)
Wallner, Markus; Houben, Georg; Lohe, Christoph; Quinger, Martin; Himmelsbach, Thomas
2017-12-01
The identification of potential recharge areas and estimation of recharge rates to the confined semi-fossil Ohangwena II Aquifer (KOH-2) is crucial for its future sustainable use. The KOH-2 is located within the endorheic transboundary Cuvelai-Etosha-Basin (CEB), shared by Angola and Namibia. The main objective was the development of a strategy to tackle the problem of data scarcity, which is a well-known problem in semi-arid regions. In a first step, conceptual geological cross sections were created to illustrate the possible geological setting of the system. Furthermore, groundwater travel times were estimated by simple hydraulic calculations. A two-dimensional numerical groundwater model was set up to analyze flow patterns and potential recharge zones. The model was optimized against local observations of hydraulic heads and groundwater age. The sensitivity of the model against different boundary conditions and internal structures was tested. Parameter uncertainty and recharge rates were estimated. Results indicate that groundwater recharge to the KOH-2 mainly occurs from the Angolan Highlands in the northeastern part of the CEB. The sensitivity of the groundwater model to different internal structures is relatively small in comparison to changing boundary conditions in the form of influent or effluent streams. Uncertainty analysis underlined previous results, indicating groundwater recharge originating from the Angolan Highlands. The estimated recharge rates are less than 1% of mean yearly precipitation, which are reasonable for semi-arid regions.
First and Higher Order Effects on Zero Order Radiative Transfer Model
NASA Astrophysics Data System (ADS)
Neelam, M.; Mohanty, B.
2014-12-01
Microwave radiative transfer models are valuable tools for understanding complex land surface interactions. Past literature has largely focused on local sensitivity analysis for factor prioritization, ignoring the interactions between variables and the uncertainties around them. Since land surface interactions are largely nonlinear, uncertainties, heterogeneities and interactions always exist, and it is important to quantify them to draw accurate conclusions. In this effort, we used global sensitivity analysis (GSA) to address the issues of variable uncertainty, higher-order interactions, factor prioritization and factor fixing for the zero-order radiative transfer (ZRT) model. With the to-be-launched Soil Moisture Active Passive (SMAP) mission of NASA, it is very important to have a complete understanding of ZRT for soil moisture retrieval to direct future research and cal/val field campaigns. This is a first attempt to use a GSA technique to quantify first-order and higher-order effects on brightness temperature from the ZRT model. Our analyses reflect conditions observed during the agricultural growing season for corn and soybeans in two different regions: Iowa, USA, and Winnipeg, Canada. We found that for corn fields in Iowa, there exist significant second-order interactions between soil moisture, surface roughness parameters (RMS height and correlation length) and vegetation parameters (vegetation water content, structure and scattering albedo), whereas in Winnipeg, second-order interactions are mainly due to soil moisture and vegetation parameters. For soybean fields in both Iowa and Winnipeg, we found significant interactions only between soil moisture and surface roughness parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdell, S; Grassberger, C; Paganetti, H
2014-06-01
Purpose: Evaluate the sensitivity of intensity-modulated proton therapy (IMPT) lung treatments to systematic and random setup uncertainties combined with motion effects. Methods: Treatment plans with single-field homogeneity restricted to ±20% (IMPT-20%) were compared to plans with no restriction (IMPT-full). 4D Monte Carlo simulations were performed for 10 lung patients using the patient CT geometry with either ±5mm systematic or random setup uncertainties applied over a 35 × 2.5Gy(RBE) fractionated treatment course. Intra-fraction, inter-field and inter-fraction motions were investigated. 50 fractionated treatments with systematic or random setup uncertainties applied to each fraction were generated for both IMPT delivery methods and three energy-dependent spot sizes (big spots - BS σ=18-9mm, intermediate spots - IS σ=11-5mm, small spots - SS σ=4-2mm). These results were compared to a Monte Carlo recalculation of the original treatment plan, with results presented as the difference in EUD (ΔEUD), V95 (ΔV95) and target homogeneity (ΔD1-D99) between the 4D simulations and the Monte Carlo calculation on the planning CT. Results: The standard deviations in the ΔEUD were 1.95±0.47(BS), 1.85±0.66(IS) and 1.31±0.35(SS) times higher in IMPT-full compared to IMPT-20% when ±5mm systematic setup uncertainties were applied. The ΔV95 variations were also 1.53±0.26(BS), 1.60±0.50(IS) and 1.38±0.38(SS) times higher for IMPT-full. For random setup uncertainties, the standard deviations of the ΔEUD from 50 simulated fractionated treatments were 1.94±0.90(BS), 2.13±1.08(IS) and 1.45±0.57(SS) times higher in IMPT-full compared to IMPT-20%. For all spot sizes considered, the ΔD1-D99 coincided within the uncertainty limits for the two IMPT delivery methods, with the mean value always higher for IMPT-full. Statistical analysis showed significant differences between the IMPT-full and IMPT-20% dose distributions for the majority of scenarios studied. Conclusion: Lung IMPT-full treatments are more sensitive to both systematic and random setup uncertainties compared to IMPT-20%. This work was supported by the NIH R01 CA111590.
Nuclear Data Needs for Generation IV Nuclear Energy Systems
NASA Astrophysics Data System (ADS)
Rullhusen, Peter
2006-04-01
Nuclear data needs for generation IV systems. Future of nuclear energy and the role of nuclear data / P. Finck. Nuclear data needs for generation IV nuclear energy systems-summary of U.S. workshop / T. A. Taiwo, H. S. Khalil. Nuclear data needs for the assessment of gen. IV systems / G. Rimpault. Nuclear data needs for generation IV-lessons from benchmarks / S. C. van der Marck, A. Hogenbirk, M. C. Duijvestijn. Core design issues of the supercritical water fast reactor / M. Mori ... [et al.]. GFR core neutronics studies at CEA / J. C. Bosq ... [et al]. Comparative study on different phonon frequency spectra of graphite in GCR / Young-Sik Cho ... [et al.]. Innovative fuel types for minor actinides transmutation / D. Haas, A. Fernandez, J. Somers. The importance of nuclear data in modeling and designing generation IV fast reactors / K. D. Weaver. The GIF and Mexico-"everything is possible" / C. Arrenondo Sánchez -- Benchmarks, sensitivity calculations, uncertainties. Sensitivity of advanced reactor and fuel cycle performance parameters to nuclear data uncertainties / G. Aliberti ... [et al.]. Sensitivity and uncertainty study for thermal molten salt reactors / A. Biduad ... [et al.]. Integral reactor physics benchmarks- The International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPHEP) / J. B. Briggs, D. W. Nigg, E. Sartori. Computer model of an error propagation through micro-campaign of fast neutron gas cooled nuclear reactor / E. Ivanov. Combining differential and integral experiments on [symbol] for reducing uncertainties in nuclear data applications / T. Kawano ... [et al.]. Sensitivity of activation cross sections of the Hafnium, Tantalum and Tungsten stable isotopes to nuclear reaction mechanisms / V. Avrigeanu ... [et al.]. Generating covariance data with nuclear models / A. J. Koning. Sensitivity of Candu-SCWR reactors physics calculations to nuclear data files / K. S. Kozier, G. R. Dyck. The lead cooled fast reactor benchmark BREST-300: analysis with sensitivity method / V. Smirnov ... [et al.]. Sensitivity analysis of neutron cross-sections considered for design and safety studies of LFR and SFR generation IV systems / K. Tucek, J. Carlsson, H. Wider -- Experiments. INL capabilities for nuclear data measurements using the Argonne intense pulsed neutron source facility / J. D. Cole ... [et al.]. Cross-section measurements in the fast neutron energy range / A. Plompen. Recent measurements of neutron capture cross sections for minor actinides by a JNC and Kyoto University Group / H. Harada ... [et al.]. Determination of minor actinides fission cross sections by means of transfer reactions / M. Aiche ... [et al.] -- Evaluated data libraries. Nuclear data services from the NEA / H. Henriksson, Y. Rugama. Nuclear databases for energy applications: an IAEA perspective / R. Capote Noy, A. L. Nichols, A. Trkov. Nuclear data evaluation for generation IV / G. Noguère ... [et al.]. Improved evaluations of neutron-induced reactions on americium isotopes / P. Talou ... [et al.]. Using improved ENDF-based nuclear data for candu reactor calculations / J. Prodea. A comparative study on the graphite-moderated reactors using different evaluated nuclear data / Do Heon Kim ... [et al.].
Herring, William; Pearson, Isobel; Purser, Molly; Nakhaipour, Hamid Reza; Haiderali, Amin; Wolowacz, Sorrel; Jayasundara, Kavisha
2016-01-01
Our objective was to estimate the cost effectiveness of ofatumumab plus chlorambucil (OChl) versus chlorambucil in patients with chronic lymphocytic leukaemia for whom fludarabine-based therapies are considered inappropriate from the perspective of the publicly funded healthcare system in Canada. A semi-Markov model (3-month cycle length) used survival curves to govern progression-free survival (PFS) and overall survival (OS). Efficacy and safety data and health-state utility values were estimated from the COMPLEMENT-1 trial. Post-progression treatment patterns were based on clinical guidelines, Canadian treatment practices and published literature. Total and incremental expected lifetime costs (in Canadian dollars [$Can], year 2013 values), life-years and quality-adjusted life-years (QALYs) were computed. Uncertainty was assessed via deterministic and probabilistic sensitivity analyses. The discounted lifetime health and economic outcomes estimated by the model showed that, compared with chlorambucil, first-line treatment with OChl led to an increase in QALYs (0.41) and total costs ($Can27,866) and to an incremental cost-effectiveness ratio (ICER) of $Can68,647 per QALY gained. In deterministic sensitivity analyses, the ICER was most sensitive to the modelling time horizon and to the extrapolation of OS treatment effects beyond the trial duration. In probabilistic sensitivity analysis, the probability of cost effectiveness at a willingness-to-pay threshold of $Can100,000 per QALY gained was 59 %. Base-case results indicated that improved overall response and PFS for OChl compared with chlorambucil translated to improved quality-adjusted life expectancy. Sensitivity analysis suggested that OChl is likely to be cost effective subject to uncertainty associated with the presence of any long-term OS benefit and the model time horizon.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
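As a rough illustration of the variogram-based idea (not the authors' implementation), the sketch below samples star centres in the unit hypercube, evaluates a simple placeholder response surface along each factor's direction through every centre, estimates a directional variogram per factor, and integrates it to obtain an IVARS-like sensitivity metric. The number of centres, lag resolution and test function are all assumptions made for the example.

```python
import numpy as np

# Minimal VARS-style sketch: directional variograms estimated from
# star-shaped samples, integrated into a per-factor sensitivity metric.

rng = np.random.default_rng(0)

def model(x):
    # Placeholder nonlinear response surface (illustrative only).
    return np.sin(2 * np.pi * x[..., 0]) + 5.0 * x[..., 1] ** 2 + 0.1 * x[..., 2]

n_centres, n_factors, dh = 20, 3, 0.1
h_grid = np.arange(dh, 0.5 + dh, dh)          # variogram lags
centres = rng.random((n_centres, n_factors))  # star centres in [0, 1]^d

ivars = np.zeros(n_factors)
for i in range(n_factors):
    sq_diffs = {h: [] for h in h_grid}
    for c in centres:
        # Cross-section through the centre along factor i, spacing dh
        xs = np.arange(0.0, 1.0 + 1e-9, dh)
        pts = np.tile(c, (len(xs), 1))
        pts[:, i] = xs
        ys = model(pts)
        for k, h in enumerate(h_grid, start=1):
            d = ys[k:] - ys[:-k]              # pairs separated by lag k*dh = h
            sq_diffs[h].extend(0.5 * d ** 2)
    gamma = np.array([np.mean(sq_diffs[h]) for h in h_grid])  # gamma_i(h)
    ivars[i] = np.trapz(gamma, h_grid)        # integrated variogram (IVARS-like)

print("IVARS-like sensitivity per factor:", ivars)
```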
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km² area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm³/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface-water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data. A minimal sketch of the underlying linear (first-order) predictive uncertainty calculation follows this record.
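The sketch below illustrates the generic linear Bayesian predictive uncertainty and data-worth calculation that analyses of this kind rest on: posterior parameter covariance from a Jacobian, prior covariance and observation-noise covariance, projected onto a prediction's sensitivity vector, then re-evaluated with individual observations removed. The Jacobian, covariances and prediction sensitivities are random placeholders, not values from the Biscayne aquifer model, and the code is not the authors' tooling.

```python
import numpy as np

# Hedged sketch of linear (first-order) Bayesian predictive uncertainty
# and observation data-worth analysis, using synthetic placeholder inputs.

rng = np.random.default_rng(1)
n_par, n_obs = 50, 30

J = rng.normal(size=(n_obs, n_par))   # observation sensitivities d(obs)/d(par)
y = rng.normal(size=n_par)            # prediction sensitivities d(pred)/d(par)
C_p = np.eye(n_par)                   # prior parameter covariance (assumption)
C_e = 0.1 * np.eye(n_obs)             # observation noise covariance (assumption)

def predictive_variance(J, C_p, C_e, y):
    """Posterior (conditioned) variance of the prediction under linear theory."""
    S = J @ C_p @ J.T + C_e
    C_post = C_p - C_p @ J.T @ np.linalg.solve(S, J @ C_p)
    return float(y @ C_post @ y)

prior_var = float(y @ C_p @ y)
post_var = predictive_variance(J, C_p, C_e, y)
print(f"prior predictive variance:     {prior_var:.2f}")
print(f"posterior predictive variance: {post_var:.2f}")

# Data worth: increase in predictive variance when each observation is dropped
worth = []
for k in range(n_obs):
    keep = np.ones(n_obs, dtype=bool)
    keep[k] = False
    worth.append(predictive_variance(J[keep], C_p, C_e[np.ix_(keep, keep)], y)
                 - post_var)
print("most informative observation index:", int(np.argmax(worth)))
```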