Sample records for model-based uncertainty quantification

  1. Development of a Framework for Model-Based Analysis, Uncertainty Quantification, and Robust Control Design of Nonlinear Smart Composite Systems

    DTIC Science & Technology

    2015-06-04

    control, vibration and noise control, health monitoring, and energy harvesting. However, these advantages come at the cost of rate-dependent hysteresis...configuration used for energy harvesting. Uncertainty Quantification: Uncertainty quantification is pursued in two steps: (i) determination of densities...Crews and R.C. Smith, “Quantification of parameter and model uncertainty for shape memory alloy bending actuators,” Journal of Intelligent Material

  2. Quantification of uncertainties for application in detonation simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Miao; Ma, Zhibo

    2016-06-01

    Numerical simulation has become an important means of designing detonation systems, and quantification of its uncertainty is necessary for reliability certification. In quantifying the uncertainty, the most important tasks are to analyze how the uncertainties arise and evolve, and how the simulations progress from benchmark models to new models. Based on the practical needs of engineering and the technology of verification & validation, a framework of QU (quantification of uncertainty) is put forward for the case in which simulation is used on a detonation system for scientific prediction. An example is offered to describe the general idea of quantifying simulation uncertainties.

  3. Ensemble-based uncertainty quantification for coordination and control of thermostatically controlled loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lian, Jianming; Engel, Dave

    2017-07-27

    This paper presents a general uncertainty quantification (UQ) framework that provides a systematic analysis of the uncertainty involved in the modeling of a control system, and helps to improve the performance of a control strategy.

  4. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    DOE PAGES

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because they are expensive, such tests are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical, problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystems, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatory uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
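    The hierarchical idea above lends itself to a simple illustration. The sketch below is only a loose, minimal analogue (plain Monte Carlo, not the paper's Bayes-network formulation): hypothetical component-level parameter posteriors are sampled and pushed through a toy system model to obtain a system-level predictive distribution. All names and numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical component-level posteriors (e.g., stiffnesses inferred from
    # component-level tests), represented here simply as normal distributions.
    k1 = rng.normal(loc=1.2e4, scale=0.6e3, size=20000)  # component 1 stiffness [N/m]
    k2 = rng.normal(loc=0.8e4, scale=0.9e3, size=20000)  # component 2 stiffness [N/m]

    def system_response(k1, k2, mass=50.0):
        """Toy system-level model: natural frequency of two springs in series."""
        k_eff = 1.0 / (1.0 / k1 + 1.0 / k2)
        return np.sqrt(k_eff / mass) / (2.0 * np.pi)

    f_n = system_response(k1, k2)
    print(f"predicted natural frequency: {f_n.mean():.2f} Hz "
          f"+/- {f_n.std():.2f} Hz (1-sigma)")
    ```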

  5. An Uncertainty Quantification Framework for Prognostics and Condition-Based Monitoring

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2014-01-01

    This paper presents a computational framework for uncertainty quantification in prognostics in the context of condition-based monitoring of aerospace systems. The different sources of uncertainty and the various uncertainty quantification activities in condition-based prognostics are outlined in detail, and it is demonstrated that the Bayesian subjective approach is suitable for interpreting uncertainty in online monitoring. A state-space, model-based framework for prognostics that can rigorously account for the various sources of uncertainty is presented. Prognostics consists of two important steps. First, the state of the system is estimated using Bayesian tracking, and then the future states of the system are predicted until failure, thereby computing the remaining useful life of the system. The proposed framework is illustrated using the power system of a planetary rover test-bed, which is being developed and studied at NASA Ames Research Center.
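    As a rough illustration of the two steps described above (Bayesian tracking followed by forward propagation to a failure threshold), the sketch below runs a toy bootstrap particle filter on a hypothetical, linearly decaying state of charge and then propagates each particle to obtain a remaining-useful-life distribution. It is not the rover power-system model; every number is invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_particles = 5000

    # Hypothetical degradation model: state of charge decays at an uncertain rate.
    true_rate, meas_noise, proc_noise = 0.02, 0.01, 0.002
    failure_threshold = 0.2

    # Particles carry both the state (charge) and the uncertain parameter (rate).
    charge = np.full(n_particles, 1.0)
    rate = rng.normal(0.02, 0.01, n_particles).clip(1e-4)

    true_charge = 1.0
    for t in range(15):                     # -- step 1: Bayesian tracking --
        true_charge -= true_rate
        z = true_charge + rng.normal(0, meas_noise)           # noisy measurement
        charge = charge - rate + rng.normal(0, proc_noise, n_particles)
        w = np.exp(-0.5 * ((z - charge) / meas_noise) ** 2)   # likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        charge, rate = charge[idx], rate[idx]

    # -- step 2: propagate each particle to failure to get an RUL distribution --
    rul = (charge - failure_threshold) / rate
    print(f"RUL: median {np.median(rul):.1f} steps, "
          f"90% interval [{np.percentile(rul, 5):.1f}, {np.percentile(rul, 95):.1f}]")
    ```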

  6. Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer

    NASA Astrophysics Data System (ADS)

    Schulte, Horst

    2016-09-01

    A new quantification method of uncertain models for robust wind turbine control using sliding-mode techniques is presented with the objective to improve active load mitigation. This approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term, establishing and maintaining a motion on a so-called sliding surface. The injection signal is directly evaluated to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four degree-of-freedom model of the NREL 5MW reference turbine containing uncertainties.

  7. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  8. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; van der Zalm, Esther; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus of model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing, practically non-identifiable model of chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. With data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design that turns initially highly uncertain model predictions into certain ones. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
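    The following sketch illustrates only the general idea of propagating parameter sets gathered along a likelihood profile through a model prediction and inspecting the induced spread; it does not reproduce the paper's PLS index or PLS entropy definitions, and the model and parameter values are hypothetical.

    ```python
    import numpy as np

    # Toy model: y(t) = a * exp(-b * t); suppose b is poorly identified.
    def predict(theta, t):
        a, b = theta
        return a * np.exp(-b * t)

    # Hypothetical parameter sets collected while profiling b (each row re-optimizes
    # a at a fixed b, all lying within the likelihood-based confidence threshold).
    profile_samples = np.array([[2.05, 0.08],
                                [2.00, 0.10],
                                [1.96, 0.12],
                                [1.93, 0.14],
                                [1.90, 0.16]])

    t_pred = np.linspace(0.0, 20.0, 50)
    preds = np.array([predict(th, t_pred) for th in profile_samples])

    # Spread of predictions induced by the uncertainty in b: where it is large,
    # predictions should be interpreted with care (candidate region for new readouts).
    spread = preds.max(axis=0) - preds.min(axis=0)
    print("max prediction spread %.3f at t = %.1f" % (spread.max(), t_pred[spread.argmax()]))
    ```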

  9. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  10. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.

  11. Aerosol-type retrieval and uncertainty quantification from OMI data

    NASA Astrophysics Data System (ADS)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve for the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine the AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
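    A minimal sketch of the Bayesian model averaging step is given below, assuming (purely for illustration) that each candidate aerosol model yields a Gaussian AOD posterior and a marginal evidence; the model names and numbers are invented and none of this is the OMI retrieval code.

    ```python
    import numpy as np

    # Hypothetical per-model AOD posteriors (mean, std) and marginal evidences
    # for three candidate aerosol microphysical models (LUTs).
    post = {"weakly_absorbing": (0.42, 0.03, 1.8e-3),
            "biomass_burning":  (0.47, 0.05, 1.1e-3),
            "dust":             (0.55, 0.06, 2.0e-4)}

    means = np.array([m for m, s, z in post.values()])
    stds  = np.array([s for m, s, z in post.values()])
    evid  = np.array([z for m, s, z in post.values()])

    w = evid / evid.sum()                       # posterior model probabilities
    mean_bma = np.sum(w * means)                # mixture mean
    var_bma = np.sum(w * (stds**2 + means**2)) - mean_bma**2   # mixture variance

    print("model weights:", dict(zip(post, w.round(3))))
    print(f"BMA AOD = {mean_bma:.3f} +/- {np.sqrt(var_bma):.3f}")
    ```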

  12. Expert judgement and uncertainty quantification for climate change

    NASA Astrophysics Data System (ADS)

    Oppenheimer, Michael; Little, Christopher M.; Cooke, Roger M.

    2016-05-01

    Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

  13. Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows

    NASA Technical Reports Server (NTRS)

    Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William

    2015-01-01

    The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, the Wilcox (2006) k-ω Model, and the Menter Shear-Stress Transport Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point quantities were considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contributions of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will significantly reduce the amount of uncertainty in the output for transonic, wall-bounded flows.
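    For illustration, the sketch below estimates first-order Sobol indices with a Monte Carlo (Saltelli-type) estimator rather than the polynomial-chaos approach used in the paper, for a toy output depending on two hypothetical closure-coefficient intervals.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def model(x):
        """Toy output quantity of interest as a function of two 'closure
        coefficients' sampled uniformly over their epistemic intervals."""
        kappa, cb1 = x[:, 0], x[:, 1]
        return np.sin(4.0 * kappa) + 0.3 * cb1**2 + 0.1 * kappa * cb1

    bounds = np.array([[0.38, 0.42],    # hypothetical interval for coefficient 1
                       [0.12, 0.16]])   # hypothetical interval for coefficient 2
    n, d = 4096, 2

    def sample(n):
        u = rng.random((n, d))
        return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

    A, B = sample(n), sample(n)
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # replace column i with B's column
        fABi = model(ABi)
        S_i = np.mean(fB * (fABi - fA)) / var  # Saltelli (2010) first-order estimator
        print(f"first-order Sobol index S_{i + 1} = {S_i:.3f}")
    ```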

  14. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with the Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, the predictive performance of the formal generalized likelihood function is superior to that of the least squares regression and Bayesian methods with the Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive Metropolis (DREAM(ZS)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and the Morris- and DREAM(ZS)-based global sensitivity analyses yield almost identical rankings of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
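    The sketch below illustrates, on a toy linear model, how the choice of likelihood changes a Bayesian parameter posterior; it uses a plain random-walk Metropolis sampler and a Laplace likelihood as a stand-in, not the DREAM(ZS) sampler or the Schoups and Vrugt formal generalized likelihood.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy data: linear model with non-Gaussian (heavy-tailed) residuals.
    x = np.linspace(0, 10, 60)
    y_obs = 2.0 * x + rng.laplace(0.0, 1.0, x.size)

    def log_like(slope, kind):
        r = y_obs - slope * x
        if kind == "gaussian":
            return -0.5 * np.sum(r**2)      # unit-variance Gaussian residuals
        return -np.sum(np.abs(r))           # Laplace (double-exponential) residuals

    def metropolis(kind, n=20000, step=0.02):
        theta, ll = 1.0, log_like(1.0, kind)
        chain = np.empty(n)
        for i in range(n):
            prop = theta + rng.normal(0, step)
            ll_prop = log_like(prop, kind)
            if np.log(rng.random()) < ll_prop - ll:   # flat prior on the slope
                theta, ll = prop, ll_prop
            chain[i] = theta
        return chain[n // 2:]                          # discard burn-in

    for kind in ("gaussian", "laplace"):
        c = metropolis(kind)
        print(f"{kind:9s} posterior: mean {c.mean():.3f}, sd {c.std():.4f}")
    ```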

  15. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties in the process parameters that describe the processes of the model, from model structure inadequacy, as well as from uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC to simulate crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman statistic as a convergence criterion and parallel computing techniques, we let multiple Markov chains run independently in parallel and create a random walk to estimate the joint model parameter distribution. Through this distribution we limit the parameter space, obtain probabilities of parameter values, and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare them with the measurement data.
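    As a minimal illustration of the multi-chain convergence check mentioned above, the sketch below runs several Metropolis-Hastings chains from dispersed starting points on a hypothetical one-parameter posterior and computes the Gelman-Rubin statistic; it is not the LandscapeDNDC calibration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def log_post(theta):
        """Hypothetical unnormalized log-posterior for one model parameter."""
        return -0.5 * ((theta - 3.0) / 0.5) ** 2

    def run_chain(start, n=4000, step=0.3):
        theta, lp = start, log_post(start)
        out = np.empty(n)
        for i in range(n):
            prop = theta + rng.normal(0, step)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
                theta, lp = prop, lp_prop
            out[i] = theta
        return out[n // 2:]                              # discard burn-in

    # Several chains started from dispersed points (run sequentially here;
    # in practice they would run in parallel).
    chains = np.array([run_chain(s) for s in (-2.0, 0.0, 3.0, 8.0)])

    # Gelman-Rubin potential scale reduction factor R-hat.
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()               # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)             # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    print(f"R-hat = {np.sqrt(var_hat / W):.3f}  (values close to 1 indicate convergence)")
    ```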

  16. Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, H. S.

    2017-11-01

    A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by the Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
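    A crude sketch of the parametric-uncertainty idea is shown below: the empirical drag correction factor is treated as a random variable and sampled by Monte Carlo through a one-particle Stokes-drag relaxation model to obtain the first two moments of the particle velocity. This is not the authors' averaged-equation approach, and all values are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Toy point-particle model: particle velocity relaxing toward the fluid velocity
    # under Stokes drag, dv/dt = f * (u_f - v) / tau_p, with an uncertain empirical
    # drag correction factor f (treated as a random variable).
    u_f, tau_p, v0 = 1.0, 0.05, 0.0
    t = np.linspace(0.0, 0.2, 200)

    f_samples = rng.normal(loc=1.5, scale=0.15, size=5000)  # hypothetical correction

    # Analytic solution of the linear ODE for each sampled correction factor.
    v = v0 + (u_f - v0) * (1.0 - np.exp(-np.outer(f_samples, t) / tau_p))

    mean_v, std_v = v.mean(axis=0), v.std(axis=0)           # first two moments
    k = len(t) // 2
    print(f"at t = {t[k]:.3f} s: mean velocity {mean_v[k]:.3f}, std {std_v[k]:.4f}")
    ```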

  17. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on the coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
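    As a small illustration of one ingredient mentioned above, the sketch below draws a Latin Hypercube sample over two hypothetical parameters and propagates it through a stand-in analytical model to obtain a predictive distribution; it is not MADS and does not use its file-based coupling.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Latin Hypercube sample over two hypothetical parameters (hydraulic
    # conductivity and source strength), propagated through a stand-in
    # analytical model to obtain a predictive distribution at a receptor.
    sampler = qmc.LatinHypercube(d=2, seed=0)
    unit = sampler.random(n=1000)
    params = qmc.scale(unit, l_bounds=[1e-5, 0.5], u_bounds=[1e-3, 2.0])

    def predicted_concentration(k, s):
        """Toy stand-in for a transport simulator (not MADS)."""
        return s * k / (k + 5e-4)

    c = predicted_concentration(params[:, 0], params[:, 1])
    print(f"predicted concentration: median {np.median(c):.3f}, "
          f"95th percentile {np.percentile(c, 95):.3f}")
    ```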

  18. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    NASA Astrophysics Data System (ADS)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of the integrated computational materials engineering (ICME) and Materials Genome Initiative is the computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method along with their potential propagations to downstream ICME modeling and simulations.

  19. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain the model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of the parametric uncertainty and thus turn the data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density of the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted under different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinities of the strong and weak sites for uranium. Therefore, the experiments with high initial concentration of U(VI) are needed, because in those experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of the model parameters obtained from Markov chain Monte Carlo simulation using the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.

  1. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  2. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  3. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  4. Uncertainty assessment of a model for biological nitrogen and phosphorus removal: Application to a large wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    In the last few years, the use of mathematical models in Wastewater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand advanced input for their implementation that must be evaluated by an extensive data-gathering campaign, which cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain. Quantification of the uncertainty is imperative. However, despite the importance of uncertainty quantification, only a few studies have been carried out in the wastewater treatment field, and those studies only included a few of the sources of model uncertainty. Seeking to develop the area, this paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has scarcely been applied in the wastewater field. The model was based on activated-sludge models 1 (ASM1) and 2 (ASM2). Different approaches can be used for uncertainty analysis. The GLUE methodology requires a large number of Monte Carlo simulations in which a random sampling of individual parameters drawn from probability distributions is used to determine a set of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data were gathered. The analysis enabled useful insights to be gained for WWTP modelling, identifying the crucial aspects where higher uncertainty lies and where, therefore, more effort should be devoted to both data gathering and modelling practice.
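    A minimal GLUE-style sketch is given below for a toy first-order decay model: Monte Carlo parameter sampling, an informal likelihood, a behavioural threshold, and likelihood-weighted prediction bounds. The likelihood choice and threshold are arbitrary illustrations, not those used for the ASM-based WWTP model.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy "process model": first-order decay of a substrate concentration.
    t = np.linspace(0, 10, 25)
    def model(k):
        return 10.0 * np.exp(-k * t)

    obs = model(0.35) + rng.normal(0, 0.4, t.size)         # synthetic observations

    # GLUE: sample parameters from a prior range, score with an informal likelihood,
    # keep "behavioural" sets above a threshold, and weight predictions by likelihood.
    k_samples = rng.uniform(0.05, 1.0, 20000)
    sims = np.array([model(k) for k in k_samples])
    sse = ((sims - obs) ** 2).sum(axis=1)
    likelihood = 1.0 / sse                                  # one common informal choice

    behavioural = likelihood > np.quantile(likelihood, 0.9) # keep the best 10%
    w = likelihood[behavioural] / likelihood[behavioural].sum()

    # Likelihood-weighted 5-95% prediction bounds at each time step.
    order = np.argsort(sims[behavioural], axis=0)
    lo, hi = [], []
    for j in range(t.size):
        s = sims[behavioural][order[:, j], j]
        cw = np.cumsum(w[order[:, j]])
        lo.append(np.interp(0.05, cw, s))
        hi.append(np.interp(0.95, cw, s))
    print("5-95% GLUE bounds at t=5:", round(lo[12], 2), "-", round(hi[12], 2))
    ```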

  5. Uncertainty quantification and validation of 3D lattice scaffolds for computer-aided biomedical applications.

    PubMed

    Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J

    2017-07-01

    A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process.
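    The distribution-selection step can be illustrated with a short sketch: candidate parametric distributions are fitted by maximum likelihood to hypothetical strut-property data and compared with the Bayesian information criterion (BIC), the lowest BIC being selected. The data here are synthetic stand-ins, not the micro-CT-derived values.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Hypothetical effective elastic moduli (GPa) extracted from reconstructed
    # strut models; in practice these would come from the micro-CT-based simulations.
    data = rng.lognormal(mean=np.log(2.0), sigma=0.15, size=200)

    candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "gamma": stats.gamma}

    best = None
    for name, dist in candidates.items():
        params = dist.fit(data)                        # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        k = len(params)                                # number of fitted parameters
        bic = k * np.log(len(data)) - 2.0 * loglik     # Bayesian information criterion
        print(f"{name:9s}  BIC = {bic:8.2f}")
        if best is None or bic < best[1]:
            best = (name, bic)

    print("selected distribution:", best[0])
    ```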

  6. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE PAGES

    Wang, Yan; Swiler, Laura

    2017-09-07

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  8. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional/national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and at various spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model, input data and parameter induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input-data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practices data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability, followed by sampled sets of input datasets and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Further, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, necessary for the comprehensibility of modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty for a regional N2O and NO emissions inventory for the state of Saxony, Germany.

  9. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.

  10. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. Approximating prediction uncertainty for random forest regression models

    Treesearch

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forests are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
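    One simple (approximate) way to expose prediction uncertainty for a random forest is to look at the spread of the individual tree predictions, as sketched below with scikit-learn on synthetic data; this is an illustration only and not necessarily the approximation developed by the authors.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a continuous mapping target (e.g., biomass vs. spectral bands).
    X = rng.uniform(0, 1, size=(500, 3))
    y = 5.0 * X[:, 0] + np.sin(6.0 * X[:, 1]) + rng.normal(0, 0.3, 500)

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

    X_new = rng.uniform(0, 1, size=(5, 3))
    # Per-tree predictions: their spread is one simple (approximate) measure of
    # prediction uncertainty for each new location.
    per_tree = np.stack([tree.predict(X_new) for tree in rf.estimators_])
    mean = per_tree.mean(axis=0)
    lo, hi = np.percentile(per_tree, [5, 95], axis=0)
    for m, a, b in zip(mean, lo, hi):
        print(f"prediction {m:6.2f}   approx. 90% interval [{a:6.2f}, {b:6.2f}]")
    ```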

  12. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  13. Satellite Re-entry Modeling and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Horsley, M.

    2012-09-01

    LEO trajectory modeling is a fundamental aerospace capability and has applications in many areas of aerospace, such as maneuver planning, sensor scheduling, re-entry prediction, collision avoidance, risk analysis, and formation flying. Somewhat surprisingly, modeling the trajectory of an object in low Earth orbit is still a challenging task. This is primarily due to the large uncertainty in the upper atmospheric density, about 15-20% (1-sigma) for most thermosphere models. Other contributions come from our inability to precisely model future solar and geomagnetic activity, the potentially unknown shape, material construction and attitude history of the satellite, and intermittent, noisy tracking data. Current methods to predict a satellite's re-entry trajectory typically involve making a single prediction, with the uncertainty dealt with in an ad-hoc manner, usually based on past experience. However, due to the extreme speed of a LEO satellite, even small uncertainties in the re-entry time translate into a very large uncertainty in the location of the re-entry event. Currently, most methods simply update the re-entry estimate on a regular basis. This results in a wide range of estimates that are literally spread over the entire globe. With no understanding of the underlying distribution of potential impact points, the sequence of impact points predicted by the current methodology is largely useless until just a few hours before re-entry. This paper will discuss the development of a set of High Performance Computing (HPC)-based capabilities to support near real-time quantification of the uncertainty inherent in uncontrolled satellite re-entries. Appropriate management of the uncertainties is essential for a rigorous treatment of the re-entry/LEO trajectory problem. The development of HPC-based tools for re-entry analysis is important as it will allow a rigorous and robust approach to risk assessment by decision makers in an operational setting. Uncertainty quantification results from the recent uncontrolled re-entry of the Phobos-Grunt satellite will be presented and discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  14. Model Uncertainty Quantification Methods In Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  15. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs?; Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU); Optimization: What parameter values yield the best performing design or operating condition, given constraints?; Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
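
    The kind of parameter variation study Dakota automates can be sketched generically as a space-filling sample pushed through a simulation interface. The snippet below is a hypothetical stand-in, not Dakota input syntax; the model, bounds, and sample size are assumptions:

    ```python
    import numpy as np
    from scipy.stats import qmc

    def simulation(x):
        """Stand-in for a physics-based model evaluated through a generic interface."""
        k, q = x
        return q / (1.0 + k)              # hypothetical scalar response

    sampler = qmc.LatinHypercube(d=2, seed=42)
    unit = sampler.random(n=64)                           # 64 space-filling samples
    samples = qmc.scale(unit, [0.1, 10.0], [2.0, 50.0])   # assumed parameter bounds

    responses = np.array([simulation(x) for x in samples])
    print(f"response mean {responses.mean():.3f}, std {responses.std(ddof=1):.3f}")
    ```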

  16. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient and evolutionary strategy based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian processes (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history-match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
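
    A minimal sketch of the forward-proxy idea, assuming scikit-learn is available: a Gaussian process is trained on a small ensemble of simulator runs and then searched cheaply for a low-misfit parameter set. The reservoir_sim stand-in, parameter ranges, and kernel choice are hypothetical, and the exhaustive proxy search merely stands in for the Bayesian optimization and VARS steps of the paper:

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(3)

    def reservoir_sim(theta):
        """Stand-in for an expensive simulator returning a data-misfit value."""
        perm, poro = theta
        return (perm - 150.0) ** 2 / 1e4 + (poro - 0.22) ** 2 * 1e2

    # Small training ensemble of simulator runs.
    X = np.column_stack([rng.uniform(50, 300, 40), rng.uniform(0.1, 0.35, 40)])
    y = np.array([reservoir_sim(t) for t in X])

    gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[50.0, 0.05]),
                                  normalize_y=True).fit(X, y)

    # Cheap exhaustive search on the proxy stands in for Bayesian optimization here.
    cand = np.column_stack([rng.uniform(50, 300, 5000), rng.uniform(0.1, 0.35, 5000)])
    mu = gp.predict(cand)
    best = cand[np.argmin(mu)]
    print(f"proxy-suggested match: perm={best[0]:.1f} mD, poro={best[1]:.3f}")
    ```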

  17. Plasticity models of material variability based on uncertainty quantification techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Reese E.; Rizzi, Francesco; Boyce, Brad

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. We demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  18. Quantification of uncertainty in flood risk assessment for flood protection planning: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Dittes, Beatrice; Špačková, Olga; Ebrahimian, Negin; Kaiser, Maria; Rieger, Wolfgang; Disse, Markus; Straub, Daniel

    2017-04-01

    Flood risk estimates are subject to significant uncertainties, e.g. due to limited records of historic flood events, uncertainty in flood modeling, uncertain impact of climate change or uncertainty in the exposure and loss estimates. In traditional design of flood protection systems, these uncertainties are typically accounted for only implicitly, based on engineering judgment. In the AdaptRisk project, we develop a fully quantitative framework for planning of flood protection systems under current and future uncertainties using quantitative pre-posterior Bayesian decision analysis. In this contribution, we focus on the quantification of the uncertainties and study their relative influence on the flood risk estimate and on the planning of flood protection systems. The following uncertainty components are included using a Bayesian approach: 1) inherent and statistical (i.e. limited record length) uncertainty; 2) climate uncertainty that can be learned from an ensemble of GCM-RCM models; 3) estimates of climate uncertainty components not covered in 2), such as bias correction, incomplete ensemble, local specifics not captured by the GCM-RCM models; 4) uncertainty in the inundation modelling; 5) uncertainty in damage estimation. We also investigate how these uncertainties may be reduced in the future when new evidence - such as new climate models, observed extreme events, and socio-economic data - becomes available. Finally, we look into how this new evidence influences the risk assessment and the effectiveness of flood protection systems. We demonstrate our methodology for a pre-alpine catchment in southern Germany: the Mangfall catchment in Bavaria that includes the city of Rosenheim, which suffered significant losses during the 2013 flood event.
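
    Component 1), the statistical uncertainty due to limited record length, can be illustrated with a parametric bootstrap of a Gumbel flood-frequency fit. The synthetic 40-year record and distribution parameters below are assumptions for illustration, not data from the Mangfall catchment:

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    # Assumed 40-year annual-maximum discharge record (synthetic, in m^3/s).
    record = gumbel_r.rvs(loc=300.0, scale=80.0, size=40, random_state=5)

    def q100(sample):
        """100-year flood estimate from a Gumbel fit to an annual-maximum sample."""
        loc, scale = gumbel_r.fit(sample)
        return gumbel_r.ppf(1.0 - 1.0 / 100.0, loc=loc, scale=scale)

    # Parametric bootstrap: refit to records re-simulated from the fitted distribution.
    loc0, scale0 = gumbel_r.fit(record)
    boot = [q100(gumbel_r.rvs(loc=loc0, scale=scale0, size=len(record), random_state=r))
            for r in range(1000)]
    lo, hi = np.percentile(boot, [5, 95])
    print(f"Q100 point estimate {q100(record):.0f}, 90% interval [{lo:.0f}, {hi:.0f}] m^3/s")
    ```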

  19. Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Gumbert, Clyde

    2017-01-01

    The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design under uncertainty efforts. In this study, multifidelity is used to describe both the fidelity of the modeling of the physical systems and the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to reproduce the output uncertainty predicted by the high-fidelity model at a significant reduction in computational cost.
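
    A minimal sketch of point-collocation non-intrusive polynomial chaos for a single standard-normal input: polynomial chaos coefficients are fit by least squares at sampled collocation points, and the output mean and variance follow from the coefficients. The toy model, polynomial order, and oversampling factor are assumptions:

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    rng = np.random.default_rng(6)

    def model(xi):
        """Stand-in for an expensive solver driven by a standard-normal input xi."""
        return np.exp(0.3 * xi) + 0.1 * xi**2

    order, oversample = 4, 2
    xi = rng.standard_normal(oversample * (order + 1))       # collocation points
    Psi = hermevander(xi, order)                             # probabilists' Hermite basis
    coef, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)   # point-collocation fit

    mean = coef[0]
    var = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))
    print(f"PC mean {mean:.4f}, PC std {np.sqrt(var):.4f}")
    ```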

  20. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

    The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty from FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be considered in the model, concluding that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, although the double-point calibration gave results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
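
    The weighted-linear-regression calibration described above can be sketched as follows; the concentrations, peak areas, and uncertainty model below are invented for illustration, and only the sample-area and slope uncertainties are propagated (first order):

    ```python
    import numpy as np

    # Assumed calibration data: standard concentrations (ug/L) and peak areas (counts).
    conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0, 100.0])
    area = np.array([120.0, 980.0, 4.9e3, 9.7e3, 4.85e4, 9.9e4])
    u_area = 0.03 * area + 50.0          # assumed per-point peak-area uncertainties

    # Weighted linear regression; np.polyfit expects 1/sigma weights.
    coeffs, cov = np.polyfit(conc, area, deg=1, w=1.0 / u_area, cov=True)
    slope, intercept = coeffs

    # Quantify an unknown sample; propagate sample-area and slope uncertainty only.
    a_sample, u_sample = 2.4e4, 0.03 * 2.4e4
    c_est = (a_sample - intercept) / slope
    u_c = c_est * np.sqrt((u_sample / (a_sample - intercept)) ** 2
                          + (np.sqrt(cov[0, 0]) / slope) ** 2)
    print(f"estimated concentration {c_est:.1f} +/- {u_c:.1f} ug/L (1-sigma)")
    ```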

  1. Uncertainty Quantification and Certification Prediction of Low-Boom Supersonic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.

    2014-01-01

    The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used to reduce the computational cost of propagating mixed, inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology has also been introduced to quantify the plausibility of a design passing certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.
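
    Mixed aleatory/epistemic propagation is often organized as a nested (second-order) Monte Carlo: an outer loop over epistemic realizations and an inner loop over aleatory samples, producing a family of output CDFs. The sketch below is illustrative only; the loudness model, the epistemic bias interval, and the Mach scatter are assumptions, not values from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def ground_signature(mach, model_bias):
        """Stand-in for CFD + propagation: perceived loudness in PLdB (illustrative)."""
        return 79.0 + 4.0 * (mach - 1.6) + model_bias

    # Outer loop: epistemic model-form bias, known only as an interval [-1.5, 1.5] PLdB.
    # Inner loop: aleatory scatter in cruise Mach number.
    curves = []
    for bias in np.linspace(-1.5, 1.5, 20):                   # epistemic realizations
        mach = rng.normal(1.6, 0.02, size=2000)               # aleatory samples
        curves.append(np.sort(ground_signature(mach, bias)))  # one empirical CDF

    curves = np.array(curves)
    p95 = np.percentile(curves, 95, axis=1)                   # 95th percentile per CDF
    print(f"95th-percentile loudness ranges over [{p95.min():.1f}, {p95.max():.1f}] PLdB")
    ```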

  2. Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.

    2016-06-01

    We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating separately subduction and background (crustal) earthquakes allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.

  3. On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza

    2015-12-01

    In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of the most important random inputs.
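
    First-order Sobol' indices of the kind used for the global sensitivity analysis can be estimated with a pick-freeze Monte Carlo scheme. The toy cell-voltage function, the three inputs, and their ranges below are placeholders, not the 19-parameter battery model of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def cell_voltage(x):
        """Stand-in for a battery model: voltage as a function of 3 uncertain inputs."""
        d_s, k_rxn, eps = x[..., 0], x[..., 1], x[..., 2]
        return 4.1 - 0.3 / d_s - 0.2 / k_rxn - 0.1 * eps**2

    n, d = 20_000, 3
    A = rng.uniform(0.5, 1.5, size=(n, d))      # assumed normalized parameter ranges
    B = rng.uniform(0.5, 1.5, size=(n, d))
    yA, yB = cell_voltage(A), cell_voltage(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)

    for i, name in enumerate(["solid diffusivity", "reaction rate", "porosity"]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # "freeze" all but the i-th input
        Si = np.mean(yB * (cell_voltage(ABi) - yA)) / var_y   # Saltelli (2010) estimator
        print(f"first-order Sobol index, {name}: {Si:.3f}")
    ```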

  4. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and are induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.

  5. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all at once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and the methodology is then applied to the NASA Langley Uncertainty Quantification Challenge problem.

  6. Uncertainty quantification of overpressure buildup through inverse modeling of compaction processes in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto

    2017-03-01

    This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous literature works, i.e. the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as pore pressure vertical distribution, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow for diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.

  7. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  8. Uncertainty Quantification in High Throughput Screening: Applications to Models of Endocrine Disruption, Cytotoxicity, and Zebrafish Development (GRC Drug Safety)

    EPA Science Inventory

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of bioche...

  9. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  11. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation, and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10⁴ to 10⁶ times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
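
    The core idea, random sampling of uncertain biological inputs followed by repeated evaluation of the dose model, can be sketched for EQD2 under the linear-quadratic model. The dose values and the lognormal 30% scatter on α/β below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_samples = 100_000                        # comparable in spirit to the 10^4-10^6 runs above

    # Assumed treatment parameters (illustrative): total dose and dose per fraction.
    D, d = 60.0, 3.0                           # Gy

    # Biological input uncertainty: alpha/beta with ~30% relative (lognormal) scatter.
    ab = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n_samples)   # Gy

    eqd2 = D * (d + ab) / (2.0 + ab)           # linear-quadratic EQD2
    print(f"EQD2 mean {eqd2.mean():.1f} Gy, 1-sigma {eqd2.std(ddof=1):.1f} Gy, "
          f"2.5-97.5% range [{np.percentile(eqd2, 2.5):.1f}, {np.percentile(eqd2, 97.5):.1f}] Gy")
    ```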

  12. Uncertainty quantification in downscaling procedures for effective decisions in energy systems

    NASA Astrophysics Data System (ADS)

    Constantinescu, E. M.

    2010-12-01

    Weather is a major driver of both energy supply and demand, and with the massive adoption of renewable energy sources and changing economic and producer-consumer paradigms, the management of next-generation energy systems is becoming ever more challenging. The operational and planning decisions in energy systems are guided by efficiency and reliability, and therefore a central role in these decisions will be played by the ability to obtain weather condition forecasts with accurate uncertainty estimates. The appropriate temporal and spatial resolutions needed for effective decision-making, be it operational or planning, are not clear. It is arguably certain, however, that temporal scales such as hourly variations of temperature or wind conditions and ramp events are essential in this process. Planning activities involve decade- or decades-long projections of weather. One sensible way to achieve this is to embed regional weather models in a global climate system. This strategy acts as a downscaling procedure. Uncertainty modeling techniques must be developed in order to quantify and minimize forecast errors as well as target the variables that impact the decision-making process the most. We discuss the challenges of obtaining a realistic uncertainty quantification estimate using mathematical algorithms based on scalable matrix-free computations and physics-based statistical models. The process of making decisions for energy management systems based on future weather scenarios is a very complex problem. We shall focus on the challenges in generating wind power predictions based on regional weather predictions, and discuss the implications of making the common assumptions about the uncertainty models.

  13. Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification

    PubMed Central

    Pham, Tuan D.

    2014-01-01

    The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744

  14. Adjoint-Based Uncertainty Quantification with MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, Jeffrey E.

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  15. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  16. Quantification of key long-term risks at CO₂ sequestration sites: Latest results from US DOE's National Risk Assessment Partnership (NRAP) Project

    DOE PAGES

    Pawar, Rajesh; Bromhal, Grant; Carroll, Susan; ...

    2014-12-31

    Risk assessment for geologic CO₂ storage, including quantification of risks, is an area of active investigation. The National Risk Assessment Partnership (NRAP) is a US Department of Energy (US-DOE) effort focused on developing a defensible, science-based methodology and platform for quantifying risk profiles at geologic CO₂ sequestration sites. NRAP has been developing a methodology that centers around the development of an integrated assessment model (IAM) using a system modeling approach to quantify risks and risk profiles. The IAM has been used to calculate risk profiles for a few key potential impacts due to potential CO₂ and brine leakage. The simulation results are also used to determine long-term storage security relationships and compare the long-term storage effectiveness to the IPCC storage permanence goal. Additionally, we also demonstrate the application of the IAM for uncertainty quantification in order to determine the parameters to which the uncertainty in model results is most sensitive.

  17. Towards Characterization, Modeling, and Uncertainty Quantification in Multi-scale Mechanics of Organic-rich Shales

    NASA Astrophysics Data System (ADS)

    Abedi, S.; Mashhadian, M.; Noshadravan, A.

    2015-12-01

    Increasing the efficiency and sustainability of hydrocarbon recovery from organic-rich shales requires a fundamental understanding of the chemomechanical properties of organic-rich shales. This understanding is manifested in the form of physics-based predictive models capable of capturing the highly heterogeneous and multi-scale structure of organic-rich shale materials. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. Experiments such as coupled grid nano-indentation and energy dispersive x-ray spectroscopy, together with micromechanical modeling attributing the role of organic maturity to the texture of the material, allow us to identify unique clay mechanical properties among different samples that are independent of the maturity of shale formations and total organic content. The results can then be used to inform the physically-based multiscale model for organic-rich shales consisting of three levels that span from the scale of elementary building blocks (e.g. clay minerals in clay-dominated formations) of organic-rich shales to the scale of the macroscopic inorganic/organic hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for the uncertainty in compositional and mechanical model parameters. Thus, we take this model one step further by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties and mineralogy mass percent. To that end, we use the Maximum Entropy Principle and random matrix theory to construct probabilistic descriptions of model inputs based on available information. Monte Carlo simulation is then carried out to propagate the uncertainty and consequently construct probabilistic descriptions of properties at multiple length-scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of the prediction of essential subsurface parameters at the engineering scale.

  18. Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya

    2017-12-01

    We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
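
    The 4DVar element can be sketched on a toy scalar model: the initial state is found by minimizing a cost made of a background term plus the misfit to observations along the trajectory, and the inverse Hessian at the minimum gives a simple uncertainty estimate. The toy growth model and error settings below are assumptions, not the multi-phase-field model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(12)
    a, n_steps, sig_b, sig_o = 1.05, 20, 0.5, 0.1    # assumed model and error settings

    def propagate(x0):
        """Toy forward model standing in for the multi-phase-field solver."""
        return x0 * a ** np.arange(n_steps)

    truth = propagate(1.3)
    obs_idx = np.arange(0, n_steps, 5)
    obs = truth[obs_idx] + rng.normal(0.0, sig_o, obs_idx.size)
    x_background = 1.0                               # prior guess of the initial state

    def cost(x0):
        """4DVar cost: background term plus observation misfit along the trajectory."""
        traj = propagate(x0[0])
        jb = 0.5 * ((x0[0] - x_background) / sig_b) ** 2
        jo = 0.5 * np.sum(((traj[obs_idx] - obs) / sig_o) ** 2)
        return jb + jo

    res = minimize(cost, x0=[x_background])          # default BFGS
    # Approximate posterior spread from the BFGS inverse Hessian.
    print(f"analysed initial state {res.x[0]:.3f} "
          f"+/- {np.sqrt(res.hess_inv[0, 0]):.3f} (truth 1.3)")
    ```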

  19. Uncertainty Quantification of Evapotranspiration and Infiltration from Modeling and Historic Time Series at the Savannah River F-Area

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Flach, G. P.

    2012-12-01

    The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Bair-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate the PET, ET, and I distributions, which are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.

  20. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analyses for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. 
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
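
    A minimal random-walk Metropolis sampler with a single covariance adaptation step, in the spirit of the adaptive Metropolis algorithms discussed above (this is an illustrative sketch, not the DRAM or DREAM implementations used in the dissertation; the two-parameter decay model and noise level are assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    # Synthetic calibration data from a two-parameter exponential decay model.
    t = np.linspace(0.0, 5.0, 30)
    theta_true, sigma = np.array([2.0, 0.7]), 0.05
    data = theta_true[0] * np.exp(-theta_true[1] * t) + rng.normal(0.0, sigma, t.size)

    def log_post(theta):
        """Gaussian likelihood with a flat prior on positive parameters."""
        if np.any(theta <= 0.0):
            return -np.inf
        resid = data - theta[0] * np.exp(-theta[1] * t)
        return -0.5 * np.sum((resid / sigma) ** 2)

    n_iter, theta = 20_000, np.array([1.0, 1.0])
    cov = np.diag([0.01, 0.01])                     # initial proposal covariance
    chain, lp = np.empty((n_iter, 2)), log_post(theta)
    for k in range(n_iter):
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[k] = theta
        if k == 2_000:                              # single adaptation step for brevity
            cov = 2.38**2 / 2 * np.cov(chain[:k].T) + 1e-8 * np.eye(2)

    post = chain[5_000:]
    print("posterior means:", post.mean(axis=0), "posterior stds:", post.std(axis=0))
    ```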

  1. Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, Stephen D.

    This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?

  2. Stochastic Analysis and Design of Heterogeneous Microstructural Materials System

    NASA Astrophysics Data System (ADS)

    Xu, Hongyi

    Advanced materials systems refers to new materials that are composed of multiple traditional constituents but have complex microstructure morphologies, which lead to superior properties over conventional materials. To accelerate the development of new advanced materials systems, the objective of this dissertation is to develop a computational design framework and the associated techniques for design automation of microstructure materials systems, with an emphasis on addressing the uncertainties associated with the heterogeneity of microstructural materials. Five key research tasks are identified: design representation, design evaluation, design synthesis, material informatics and uncertainty quantification. Design representation of microstructure includes statistical characterization and stochastic reconstruction. This dissertation develops a new descriptor-based methodology, which characterizes 2D microstructures using descriptors of composition, dispersion and geometry. Statistics of 3D descriptors are predicted based on 2D information to enable 2D-to-3D reconstruction. An efficient sequential reconstruction algorithm is developed to reconstruct statistically equivalent random 3D digital microstructures. In design evaluation, a stochastic decomposition and reassembly strategy is developed to deal with the high computational costs and uncertainties induced by material heterogeneity. The properties of Representative Volume Elements (RVE) are predicted by stochastically reassembling Statistical Volume Elements (SVEs) with stochastic properties into a coarse representation of the RVE. In design synthesis, a new descriptor-based design framework is developed, which integrates computational methods of microstructure characterization and reconstruction, sensitivity analysis, Design of Experiments (DOE), metamodeling and optimization to enable parametric optimization of the microstructure for achieving the desired material properties. Material informatics is studied to efficiently reduce the dimension of the microstructure design space. This dissertation develops a machine learning-based methodology to identify the key microstructure descriptors that highly impact the properties of interest. In uncertainty quantification, a comparative study on data-driven random process models is conducted to provide guidance for choosing the most accurate model in statistical uncertainty quantification. Two new goodness-of-fit metrics are developed to provide quantitative measurements of random process models' accuracy. The benefits of the proposed methods are demonstrated by the example of designing the microstructure of polymer nanocomposites. This dissertation provides material-generic, intelligent modeling/design methodologies and techniques to accelerate the process of analyzing and designing new microstructural materials systems.

  3. Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Swinburne, Thomas D.; Perez, Danny

    2018-05-01

    A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.

  4. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
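
    Richardson extrapolation and the GCI, as used for solution verification, reduce to a few lines once three systematically refined grid solutions are available. The grid values below are invented for illustration:

    ```python
    import numpy as np

    # Solutions of a scalar quantity on three systematically refined grids
    # (f1 finest), with a constant grid refinement ratio r. Values are illustrative.
    f1, f2, f3, r = 0.9713, 0.9621, 0.9305, 2.0

    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)          # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)                # Richardson-extrapolated value
    gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)   # Grid Convergence Index (Fs=1.25)

    print(f"observed order p = {p:.2f}")
    print(f"Richardson estimate = {f_exact:.4f}")
    print(f"GCI (fine grid) = {100 * gci_fine:.2f}%")
    ```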

  5. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  6. Quantification of water resources uncertainties in the Luvuvhu sub-basin of the Limpopo river basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, N.; Hughes, D.; Kapangaziwiri, E.; Mwenge Kahinda, J.; Mvandaba, V.

    2018-06-01

    In the absence of historical observed data, models are generally used to describe the different hydrological processes and generate data and information that will inform management and policy decision making. Ideally, any hydrological model should be based on a sound conceptual understanding of the processes in the basin and be backed by quantitative information for the parameterization of the model. However, these data are often inadequate in many sub-basins, necessitating the incorporation of the uncertainty related to the estimation process. This paper reports on the impact of the uncertainty related to the parameterization of the Pitman monthly model and water use data on the estimates of the water resources of the Luvuvhu, a sub-basin of the Limpopo river basin. The study reviews existing information sources associated with the quantification of water balance components and gives an update of the water resources of the sub-basin. The flows generated by the model at the outlet of the basin were between 44.03 Mm³ and 45.48 Mm³ per month when incorporating ±20% uncertainty in the main physical runoff-generating parameters. The total predictive uncertainty of the model increased when water use data such as small farm and large reservoirs and irrigation were included. The dam capacity data were considered at an average of 62% uncertainty, mainly as a result of the large differences between the available information in the national water resources database and that digitised from satellite imagery. Water used by irrigated crops was estimated with an average of about 50% uncertainty. The mean simulated monthly flows were between 38.57 Mm³ and 54.83 Mm³ after the water use uncertainty was added. However, it is expected that the uncertainty could be reduced by using higher resolution remote sensing imagery.

  7. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized by high dimensionality and tremendous complexity, where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analyses are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to the performance of the KL approach and the brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about the model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT - COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation, a subspace-based, gradient-free and nonlinear algorithm for inverse uncertainty quantification, namely Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE 6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL Progression Problem Number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL Progression Problem Number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
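
    As a rough, hedged illustration of the subspace idea described above (not the dissertation's actual MPACT/COBRA-TF/ORIGEN workflow), the Python sketch below uses a randomized range finder to build a low-dimensional basis from black-box model snapshots; the model, dimensions, and snapshot counts are hypothetical.

      import numpy as np

      def randomized_range_finder(apply_model, n_inputs, n_snapshots, rank, seed=0):
          """Approximate the dominant response subspace of a black-box model.

          apply_model maps a parameter vector of length n_inputs to a response
          vector; the orthonormal columns of the returned Q span the directions
          captured by random parameter perturbations (the influential DoF).
          """
          rng = np.random.default_rng(seed)
          snapshots = np.column_stack(
              [apply_model(rng.standard_normal(n_inputs)) for _ in range(n_snapshots)]
          )
          q, _ = np.linalg.qr(snapshots)   # orthonormal basis for the snapshot range
          return q[:, :rank]               # truncate to the target subspace rank

      # Toy linear stand-in for a coupled-physics response map (hypothetical).
      A = np.random.default_rng(1).standard_normal((50, 200))
      Q = randomized_range_finder(lambda p: A @ p, n_inputs=200, n_snapshots=20, rank=5)
      print(Q.shape)   # (50, 5): responses compressed to a 5-dimensional subspace

    Forward UQ, DA, or TAA studies would then be carried out on the coordinates of this reduced basis rather than on the full degree-of-freedom set.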

  8. Climate change impacts on tree ranges: model intercomparison facilitates understanding and quantification of uncertainty.

    PubMed

    Cheaib, Alissar; Badeau, Vincent; Boe, Julien; Chuine, Isabelle; Delire, Christine; Dufrêne, Eric; François, Christophe; Gritti, Emmanuel S; Legay, Myriam; Pagé, Christian; Thuiller, Wilfried; Viovy, Nicolas; Leadley, Paul

    2012-06-01

    Model-based projections of shifts in tree species range due to climate change are becoming an important decision support tool for forest management. However, poorly evaluated sources of uncertainty require more scrutiny before relying heavily on models for decision-making. We evaluated uncertainty arising from differences in model formulations of tree response to climate change based on a rigorous intercomparison of projections of tree distributions in France. We compared eight models ranging from niche-based to process-based models. On average, models project large range contractions of temperate tree species in lowlands due to climate change. There was substantial disagreement between models for temperate broadleaf deciduous tree species, but differences in the capacity of models to account for rising CO2 impacts explained much of the disagreement. There was good quantitative agreement among models concerning the range contractions for Scots pine. For the dominant Mediterranean tree species, Holm oak, all models foresee substantial range expansion. © 2012 Blackwell Publishing Ltd/CNRS.

  9. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or errors in the model structure. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.

  10. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties, chief among them the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. We apply these methods first to the development of a cooperative autonomous observing system using sUAS for studying coherent structures, and second to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  11. A geostatistical approach for quantification of contaminant mass discharge uncertainty using multilevel sampler measurements

    NASA Astrophysics Data System (ADS)

    Li, K. Betty; Goovaerts, Pierre; Abriola, Linda M.

    2007-06-01

    Contaminant mass discharge across a control plane downstream of a dense nonaqueous phase liquid (DNAPL) source zone has great potential to serve as a metric for the assessment of the effectiveness of source zone treatment technologies and for the development of risk-based source-plume remediation strategies. However, too often the uncertainty of mass discharge estimated in the field is not accounted for in the analysis. In this paper, a geostatistical approach is proposed to estimate mass discharge and to quantify its associated uncertainty using multilevel transect measurements of contaminant concentration (C) and hydraulic conductivity (K). The approach adapts the p-field simulation algorithm to propagate and upscale the uncertainty of mass discharge from the local uncertainty models of C and K. Application of this methodology to numerically simulated transects shows that, with a regular sampling pattern, geostatistics can provide an accurate model of uncertainty for the transects that are associated with low levels of source mass removal (i.e., transects that have a large percentage of contaminated area). For high levels of mass removal (i.e., transects with a few hot spots and large areas of near-zero concentration), a total sampling area equivalent to 6-7% of the transect is required to achieve accurate uncertainty modeling. A comparison of the results for different measurement supports indicates that samples taken with longer screen lengths may lead to less accurate models of mass discharge uncertainty. The quantification of mass discharge uncertainty, in the form of a probability distribution, will facilitate risk assessment associated with various remediation strategies.
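
    The upscaling step described above can be caricatured with a simplified Monte Carlo sketch in Python: per-cell concentration and conductivity realizations are combined into a distribution of mass discharge across the transect. The independent lognormal fields, cell geometry, and hydraulic gradient below are hypothetical placeholders, not the paper's p-field simulation of spatially correlated fields.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical transect discretized into cells; each realization draws
      # concentration C [mg/L, i.e. g/m^3] and conductivity K [m/d] per cell.
      n_cells, n_realizations = 200, 1000
      cell_area = 0.25      # m^2 per cell
      gradient = 0.005      # assumed hydraulic gradient [-]

      C = np.exp(rng.normal(loc=0.0, scale=1.0, size=(n_realizations, n_cells)))
      K = np.exp(rng.normal(loc=1.0, scale=0.5, size=(n_realizations, n_cells)))

      # Darcy flux (K * gradient) times cell area gives volumetric flow [m^3/d];
      # multiplying by C [g/m^3] and summing over cells gives mass discharge [g/d].
      mass_discharge = (C * K * gradient * cell_area).sum(axis=1)

      print("median [g/d]:", np.median(mass_discharge))
      print("90% interval [g/d]:", np.percentile(mass_discharge, [5, 95]))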

  12. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear
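
    To make the notion of a computationally frugal, local-derivative method concrete, here is a minimal finite-difference sensitivity sketch in Python; the two-parameter toy model and step size are hypothetical stand-ins, and real applications would rely on the implementations cited above.

      import numpy as np

      def local_sensitivities(model, p0, rel_step=0.01):
          """One-sided finite-difference, dimensionless sensitivities at a nominal
          parameter set: one model run per parameter plus a base run, i.e. the
          'few model runs' end of the spectrum discussed above."""
          base = model(p0)
          sens = np.empty(len(p0))
          for i, p in enumerate(p0):
              step = rel_step * (abs(p) if p != 0 else 1.0)
              perturbed = np.array(p0, dtype=float)
              perturbed[i] += step
              # Scaled sensitivity of the prediction to parameter i.
              sens[i] = (model(perturbed) - base) / step * p / base
          return sens

      # Hypothetical two-parameter model returning a single prediction.
      model = lambda p: p[0] ** 2 + 3.0 * p[1]
      print(local_sensitivities(model, np.array([2.0, 1.0])))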

  13. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameters sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution’s sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
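
    One common ingredient of the mesh-resolution part of solution verification is a Richardson extrapolation / grid convergence estimate of discretization error; the short sketch below shows the arithmetic with hypothetical peak-temperature values and refinement ratio, and is not drawn from the RIP study itself.

      import numpy as np

      def richardson_extrapolation(f_coarse, f_medium, f_fine, r):
          """Observed order of accuracy, extrapolated value, and fine-grid
          convergence index (GCI) from three systematically refined meshes."""
          p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
          f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
          gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r ** p - 1.0)
          return p, f_exact, gci_fine

      # Hypothetical peak-temperature predictions [K] on coarse/medium/fine meshes.
      p, f_exact, gci = richardson_extrapolation(512.0, 508.0, 506.5, r=2.0)
      print(f"observed order {p:.2f}, extrapolated value {f_exact:.1f} K, GCI {gci:.3%}")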

  14. Uncertainty Assessment of Synthetic Design Hydrographs for Gauged and Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Brunner, Manuela I.; Sikorska, Anna E.; Furrer, Reinhard; Favre, Anne-Catherine

    2018-03-01

    Design hydrographs described by peak discharge, hydrograph volume, and hydrograph shape are essential for engineering tasks involving storage. Such design hydrographs are inherently uncertain as are classical flood estimates focusing on peak discharge only. Various sources of uncertainty contribute to the total uncertainty of synthetic design hydrographs for gauged and ungauged catchments. These comprise model uncertainties, sampling uncertainty, and uncertainty due to the choice of a regionalization method. A quantification of the uncertainties associated with flood estimates is essential for reliable decision making and allows for the identification of important uncertainty sources. We therefore propose an uncertainty assessment framework for the quantification of the uncertainty associated with synthetic design hydrographs. The framework is based on bootstrap simulations and consists of three levels of complexity. On the first level, we assess the uncertainty due to individual uncertainty sources. On the second level, we quantify the total uncertainty of design hydrographs for gauged catchments and the total uncertainty of regionalizing them to ungauged catchments but independently from the construction uncertainty. On the third level, we assess the coupled uncertainty of synthetic design hydrographs in ungauged catchments, jointly considering construction and regionalization uncertainty. We find that the most important sources of uncertainty in design hydrograph construction are the record length and the choice of the flood sampling strategy. The total uncertainty of design hydrographs in ungauged catchments depends on the catchment properties and is not negligible in our case.
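
    A minimal sketch of the bootstrap idea underlying the first level of the framework, assuming a Gumbel flood-frequency model and a synthetic 35-year peak-discharge record (both hypothetical): resampling the record with replacement and refitting yields a sampling-uncertainty interval for a design estimate.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical annual peak-discharge record [m^3/s]; short records are
      # exactly the sampling-uncertainty source highlighted above.
      peaks = rng.gumbel(loc=150.0, scale=40.0, size=35)

      def design_peak(sample, return_period=100):
          """Method-of-moments Gumbel fit and the T-year design quantile."""
          scale = np.sqrt(6.0) * sample.std(ddof=1) / np.pi
          loc = sample.mean() - 0.5772 * scale
          prob = 1.0 - 1.0 / return_period
          return loc - scale * np.log(-np.log(prob))

      # Nonparametric bootstrap: resample the record and refit.
      boot = np.array([design_peak(rng.choice(peaks, size=peaks.size, replace=True))
                       for _ in range(2000)])
      print("Q100 estimate:", design_peak(peaks))
      print("90% bootstrap interval:", np.percentile(boot, [5, 95]))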

  15. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

    PubMed Central

    Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
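
    The confidence bounds mentioned above come directly from the GPR predictive mean and standard deviation; a hedged scikit-learn sketch with synthetic data (not the paper's PMML representation or its manufacturing data set) is shown below.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # Hypothetical data: one process setting vs. a noisy measured response.
      rng = np.random.default_rng(0)
      X = rng.uniform(0.0, 10.0, size=(30, 1))
      y = np.sin(X).ravel() + rng.normal(scale=0.1, size=30)

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

      # Predictive mean and standard deviation: the ingredients for the
      # confidence bounds and predictive distribution described above.
      X_new = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
      mean, std = gpr.predict(X_new, return_std=True)
      print(np.column_stack([mean, mean - 1.96 * std, mean + 1.96 * std]))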

  16. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    PubMed

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.

  17. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
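
    A minimal sketch of the SROM idea under simplifying assumptions: a one-dimensional Gaussian input is replaced by a handful of fixed samples whose probabilities are optimized to match moments and a few CDF points. The sample locations, objective weights, and optimizer settings below are illustrative choices, not the paper's formulation.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Hypothetical random design imperfection, modeled as N(mu, sigma^2),
      # to be represented by m weighted samples.
      mu, sigma, m = 1.0, 0.2, 5
      samples = np.linspace(mu - 2 * sigma, mu + 2 * sigma, m)
      grid = np.linspace(mu - 2 * sigma, mu + 2 * sigma, 9)

      def srom_objective(p):
          # Mismatch in the first two moments plus a few CDF points.
          mean_err = (p @ samples - mu) ** 2
          var_err = (p @ (samples - mu) ** 2 - sigma ** 2) ** 2
          cdf_model = np.array([p[samples <= g].sum() for g in grid])
          cdf_err = np.mean((cdf_model - norm.cdf(grid, mu, sigma)) ** 2)
          return mean_err + var_err + cdf_err

      res = minimize(srom_objective, np.full(m, 1.0 / m),
                     bounds=[(0.0, 1.0)] * m,
                     constraints={"type": "eq", "fun": lambda p: p.sum() - 1.0})
      weights = res.x
      # Each deterministic optimization run would then be weighted by these
      # probabilities when estimating the expected objective.
      print(np.round(samples, 3), np.round(weights, 3), float(weights @ samples))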

  18. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).

  19. Parameter-induced uncertainty quantification of a regional N2O and NO3 inventory using the biogeochemical model LandscapeDNDC

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus

    2014-05-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventory in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems like arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on, e.g., microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty is not solely originating from input uncertainty but also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty on nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained by a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. Average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 [kg N / ha] with a median value of 1.05 [kg N / ha]. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 [kg N / ha] due to fertilizer use. In the regional uncertainty quantification the 20% likelihood range for N2O emissions is 0.79 - 1.37 [kg N / ha] (50% likelihood: 0.46 - 2.05 [kg N / ha]; 90% likelihood: 0.11 - 4.03 [kg N / ha]). Respective quantities were calculated for nitrate leaching. The method has proven its applicability to quantify parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process based biogeochemical models.
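
    The likelihood ranges quoted above are central intervals of the realization ensemble; the short sketch below reproduces that post-processing step on a hypothetical lognormal stand-in for the 400 LandscapeDNDC realizations.

      import numpy as np

      rng = np.random.default_rng(7)

      # Stand-in for 400 regional inventory realizations [kg N/ha] (hypothetical).
      realizations = rng.lognormal(mean=0.05, sigma=0.75, size=400)

      print("mean +/- std:", realizations.mean(), realizations.std(ddof=1))
      print("median:", np.median(realizations))

      # Central likelihood ranges, analogous to the 20%/50%/90% intervals reported.
      for level in (20, 50, 90):
          lo, hi = np.percentile(realizations, [50 - level / 2, 50 + level / 2])
          print(f"{level}% likelihood range: {lo:.2f} - {hi:.2f} kg N/ha")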

  20. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
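
    To give a flavor of an SDE whose invariant distribution is the Bayesian posterior, here is a hedged sketch of an explicit Euler-Maruyama discretization of the overdamped Langevin equation on a toy Gaussian posterior; the paper itself builds on a different Itô SDE discretized with an implicit Euler scheme, and the target, step size, and chain length below are hypothetical.

      import numpy as np

      def log_posterior_grad(theta):
          """Gradient of a toy Gaussian log-posterior (hypothetical stand-in for
          the posterior over polynomial-chaos expansion coefficients)."""
          mean = np.array([1.0, -0.5])
          prec = np.array([[4.0, 0.0], [0.0, 1.0]])
          return -prec @ (theta - mean)

      def langevin_sampler(theta0, n_steps=5000, dt=0.05, seed=3):
          """Euler-Maruyama discretization of d(theta) = 0.5 grad log pi dt + dW,
          whose invariant distribution is the posterior pi."""
          rng = np.random.default_rng(seed)
          theta = np.array(theta0, dtype=float)
          chain = np.empty((n_steps, theta.size))
          for k in range(n_steps):
              noise = rng.standard_normal(theta.size)
              theta = theta + 0.5 * dt * log_posterior_grad(theta) + np.sqrt(dt) * noise
              chain[k] = theta
          return chain

      chain = langevin_sampler([0.0, 0.0])
      print(chain[1000:].mean(axis=0))   # should approach [1.0, -0.5]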

  1. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
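
    A reduced-scale sketch of a non-intrusive polynomial chaos surrogate fitted by least squares: three standardized Gaussian inputs, a total-order-2 Hermite basis, and a toy quantity of interest stand in for the 388-parameter radiation problem; the sparse-collocation and sensitivity machinery of the paper is not reproduced here.

      import numpy as np
      from numpy.polynomial.hermite_e import hermevander
      from itertools import product

      rng = np.random.default_rng(0)

      # Hypothetical scalar QoI depending on 3 standardized Gaussian inputs.
      def model(xi):
          return 1.0 + 0.8 * xi[0] + 0.3 * xi[1] ** 2 + 0.05 * xi[0] * xi[2]

      n_train, dim, order = 200, 3, 2
      xi = rng.standard_normal((n_train, dim))
      y = np.array([model(row) for row in xi])

      # Multi-indices of total order <= 2 and the tensorized Hermite design matrix.
      multi = [m for m in product(range(order + 1), repeat=dim) if sum(m) <= order]
      V1d = [hermevander(xi[:, d], order) for d in range(dim)]
      A = np.column_stack([np.prod([V1d[d][:, m[d]] for d in range(dim)], axis=0)
                           for m in multi])

      coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
      # The fitted coefficients define the surrogate; Sobol-type indices follow
      # from the coefficients once the basis is normalized.
      print(dict(zip(multi, np.round(coeffs, 3))))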

  2. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST was to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  3. Uncertainty quantification for PZT bimorph actuators

    NASA Astrophysics Data System (ADS)

    Bravo, Nikolas; Smith, Ralph C.; Crews, John

    2018-03-01

    In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, which include the Robobee. The model is built using the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model, including local and global sensitivity analysis of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.

  4. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or the observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed about how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation method. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for 5 cm depth of soil moisture and 15% in RMSE for 15 cm depth of soil moisture. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.

  5. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  6. Assessment and Reduction of Model Parametric Uncertainties: A Case Study with A Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.

    2017-12-01

    The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by the adaptive surrogate-based multi-objective optimization procedure, using a MARS model for approximating the parameter-response relationship and the SCE-UA algorithm for searching the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40-85% reduction in 1-NSE, and 35-90% reduction in |RB|. Overall, this uncertainty quantification framework is robust, effective and efficient for parametric uncertainty analysis, the results of which provide useful information that helps to understand the model behaviors and improve the model simulations.
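
    For reference, the two calibration objectives quoted above (1-NSE and |RB|) can be computed as in the short sketch below; the streamflow values are hypothetical.

      import numpy as np

      def nse(sim, obs):
          """Nash-Sutcliffe efficiency; 1 - NSE is the first objective above."""
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def relative_bias(sim, obs):
          """Relative bias RB; |RB| is the second objective above."""
          return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

      # Hypothetical daily streamflow series [m^3/s].
      obs = np.array([10.0, 12.0, 30.0, 55.0, 22.0, 15.0, 11.0])
      sim = np.array([ 9.0, 13.0, 27.0, 60.0, 20.0, 16.0, 10.5])
      print("1 - NSE:", 1.0 - nse(sim, obs), "|RB|:", abs(relative_bias(sim, obs)))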

  7. From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches

    PubMed Central

    Potter, Kristin; Rosen, Paul; Johnson, Chris R.

    2014-01-01

    Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community. PMID:25663949

  8. Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.

    Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the importance of each input parameter on the simulation result is also investigated. These results show that further development in the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.
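
    The Latin hypercube propagation step can be sketched as below, assuming three hypothetical Gaussian foam-model inputs and a toy pressure response in place of the actual finite element decomposition model.

      import numpy as np
      from scipy.stats import qmc, norm

      # Hypothetical inputs: activation energy [kJ/mol], log10 pre-exponential
      # factor, and thermal conductivity [W/m/K], each treated as Gaussian.
      means = np.array([180.0, 13.0, 0.05])
      stds = np.array([9.0, 0.5, 0.005])

      sampler = qmc.LatinHypercube(d=3, seed=0)
      u = sampler.random(n=256)                   # stratified uniforms in [0,1)^3
      x = norm.ppf(u, loc=means, scale=stds)      # map to the Gaussian inputs

      def pressure_model(p):
          """Toy stand-in for the container-pressurization simulator."""
          ea, log_a, k = p
          return 2.0 + 0.01 * (200.0 - ea) + 0.3 * (log_a - 13.0) - 5.0 * (k - 0.05)

      pressures = np.apply_along_axis(pressure_model, 1, x)
      print("mean, std of peak pressure [MPa]:", pressures.mean(), pressures.std(ddof=1))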

  9. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
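
    The two propagation routes compared above can be caricatured in a few lines: a first-order "sandwich" estimate from sensitivities and a cross-section covariance, and a stochastic-sampling estimate from correlated perturbations. The sensitivities and covariance values below are hypothetical, and the sampled model is linearized for brevity.

      import numpy as np

      # Hypothetical relative sensitivities of k-eff to three cross sections
      # (e.g., from generalized perturbation theory) and their relative covariance.
      S = np.array([0.35, -0.12, 0.08])
      C = np.array([[4.0e-4, 1.0e-4, 0.0],
                    [1.0e-4, 9.0e-4, 0.0],
                    [0.0,    0.0,    2.5e-4]])

      # First-order sandwich propagation: var(k)/k^2 = S^T C S.
      rel_var = S @ C @ S
      print("first-order relative k-eff uncertainty: %.3f %%" % (100 * np.sqrt(rel_var)))

      # Stochastic-sampling alternative: draw correlated cross-section
      # perturbations and push each through the (here linearized) model.
      rng = np.random.default_rng(0)
      d_sigma = rng.multivariate_normal(np.zeros(3), C, size=5000)
      dk = d_sigma @ S
      print("sampled relative uncertainty: %.3f %%" % (100 * dk.std(ddof=1)))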

  10. Aeras: A next generation global atmosphere model

    DOE PAGES

    Spotz, William F.; Smith, Thomas M.; Demeshko, Irina P.; ...

    2015-06-01

    Sandia National Laboratories is developing a new global atmosphere model named Aeras that is performance portable and supports the quantification of uncertainties. These next-generation capabilities are enabled by building Aeras on top of Albany, a code base that supports the rapid development of scientific application codes while leveraging Sandia's foundational mathematics and computer science packages in Trilinos and Dakota. Embedded uncertainty quantification (UQ) is an original design capability of Albany, and performance portability is a recent upgrade. Other required features, such as shell-type elements, spectral elements, efficient explicit and semi-implicit time-stepping, transient sensitivity analysis, and concurrent ensembles, were not components of Albany as the project began, and have been (or are being) added by the Aeras team. We present early UQ and performance portability results for the shallow water equations.

  11. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    NASA Astrophysics Data System (ADS)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
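
    A minimal sketch of the FOSM calculation mentioned above: forecast variance is obtained by propagating a (prior or conditioned) parameter covariance through a linearized model. The Jacobians, covariances, and noise levels below are hypothetical stand-ins, not values from the CLAS model or PEST++.

      import numpy as np

      # Hypothetical Jacobian of one water-level forecast w.r.t. 4 hydraulic parameters.
      J = np.array([[0.8, -0.3, 0.1, 0.05]])

      # Prior parameter covariance (log-transformed properties, assumed values).
      C_prior = np.diag([0.25, 0.25, 0.5, 1.0])

      # Forecast variance under the prior: sigma^2 = J C J^T.
      var_prior = (J @ C_prior @ J.T).item()

      # Conditioning on observations (Jacobian X, noise covariance R) reduces
      # parameter uncertainty via a Schur complement, and hence forecast uncertainty;
      # the reduction is a measure of the data worth of those observations.
      X = np.array([[1.0, 0.5, 0.0, 0.0],
                    [0.2, 0.9, 0.3, 0.0]])
      R = np.diag([0.01, 0.01])
      C_post = C_prior - C_prior @ X.T @ np.linalg.inv(X @ C_prior @ X.T + R) @ X @ C_prior
      var_post = (J @ C_post @ J.T).item()

      print("prior vs posterior forecast std:", np.sqrt(var_prior), np.sqrt(var_post))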

  12. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    NASA Astrophysics Data System (ADS)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.

  13. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP, and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and can be applied in the many other industries where an accurate dynamic model is required.

  14. Experimental validation of 2D uncertainty quantification for DIC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper presents corresponding experimental work used to check the validity of the uncertainty equations.

  15. Experimental validation of 2D uncertainty quantification for digital image correlation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper presents corresponding experimental work used to check the validity of the uncertainty equations.

  16. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    NASA Astrophysics Data System (ADS)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
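
    A hedged, reduced-scale sketch of the emulation-plus-ranking workflow described above, using scikit-learn with synthetic data in place of CLASS output; the parameter count, response, and importance method (permutation importance applied to an SVR pipeline) are illustrative simplifications.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(0)

      # Hypothetical training set: 400 snow-parameter vectors -> one output metric.
      n_train, n_params = 400, 6
      X = rng.uniform(0.0, 1.0, size=(n_train, n_params))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(n_train)

      # Statistical-learning emulator of the dynamical model.
      emulator = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(X, y)

      # Permutation importance on held-out samples ranks the influential parameters.
      X_test = rng.uniform(0.0, 1.0, size=(100, n_params))
      y_test = 2.0 * X_test[:, 0] - 1.5 * X_test[:, 1] ** 2
      imp = permutation_importance(emulator, X_test, y_test, n_repeats=20, random_state=0)
      print(np.argsort(imp.importances_mean)[::-1])   # parameters ranked by influence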

  17. USACM Thematic Workshop On Uncertainty Quantification And Data-Driven Modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, James R.

    The USACM Thematic Workshop on Uncertainty Quantification and Data-Driven Modeling was held on March 23-24, 2017, in Austin, TX. The organizers of the technical program were James R. Stewart of Sandia National Laboratories and Krishna Garikipati of University of Michigan. The administrative organizer was Ruth Hengst, who serves as Program Coordinator for the USACM. The organization of this workshop was coordinated through the USACM Technical Thrust Area on Uncertainty Quantification and Probabilistic Analysis. The workshop website (http://uqpm2017.usacm.org) includes the presentation agenda as well as links to several of the presentation slides (permission to access the presentations was granted by each of those speakers, respectively). Herein, this final report contains the complete workshop program that includes the presentation agenda, the presentation abstracts, and the list of posters.

  18. Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Gao, Peiyuan

    Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.
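
    To illustrate why compressed sensing helps in the high-dimensional charge space, the sketch below recovers a sparse first-order gPC expansion from relatively few model evaluations using an L1-penalized (Lasso) fit; the dimensions, active terms, and penalty are hypothetical, and the paper's iterative-rotation enhancement is not included.

      import numpy as np
      from numpy.polynomial.hermite_e import hermevander
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)

      # Hypothetical 40-dimensional standardized charge-parameter space; only a
      # few inputs actually influence the solvation energy, so the gPC
      # coefficient vector is sparse and recoverable from few model runs.
      dim, n_runs = 40, 120
      xi = rng.standard_normal((n_runs, dim))
      y = 3.0 * xi[:, 2] - 1.2 * xi[:, 17] + 0.4 * xi[:, 31] + 0.05 * rng.standard_normal(n_runs)

      # First-order Hermite basis: constant term plus one linear term per input.
      A = np.column_stack([np.ones(n_runs)] +
                          [hermevander(xi[:, d], 1)[:, 1] for d in range(dim)])

      lasso = Lasso(alpha=0.02, fit_intercept=False).fit(A, y)
      active = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
      print("recovered active terms (0 = constant):", active, lasso.coef_[active].round(2))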

  19. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  20. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  1. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
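
    A minimal sketch of calibrating a model parameter together with an input multiplier is given below; it uses a plain random-walk Metropolis sampler rather than DREAM (which adds multi-chain differential-evolution proposals), and the toy groundwater response, priors, and observation are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_groundwater_model(log_k, recharge_mult, recharge_nominal=1.0):
    """Hypothetical stand-in for a MODFLOW run: a head-like response to a
    hydraulic conductivity and a recharge multiplier."""
    k = 10.0 ** log_k
    return recharge_mult * recharge_nominal / k

# Synthetic observation with assumed noise level.
obs, sigma = 0.8, 0.05

def log_posterior(params):
    log_k, mult = params
    if not (-1.0 < log_k < 1.0 and 0.5 < mult < 1.5):   # uniform priors
        return -np.inf
    resid = obs - toy_groundwater_model(log_k, mult)
    return -0.5 * (resid / sigma) ** 2

# Plain random-walk Metropolis (DREAM replaces the proposal mechanism,
# but the accept/reject bookkeeping is the same).
params = np.array([0.0, 1.0])
lp = log_posterior(params)
samples = []
for _ in range(20000):
    prop = params + rng.normal(scale=[0.05, 0.05])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        params, lp = prop, lp_prop
    samples.append(params.copy())

samples = np.array(samples[5000:])
print("posterior mean log10(K) = %.3f, recharge multiplier = %.3f"
      % tuple(samples.mean(axis=0)))
```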

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kress, Joel David

    The development and scale up of cost effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric (TGA) data.

  3. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches have focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  4. Quantification of model uncertainty in aerosol optical thickness retrieval from Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2013-09-01

    We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). The focus is on the uncertainty in selecting among pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
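
    The model-discrepancy idea, learning a smooth systematic residual with a Gaussian process so it can be folded into the retrieval's likelihood, can be sketched as below; the wavelengths and residual values are illustrative assumptions, not OMAERO output.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Hypothetical ensemble of residuals: modelled minus observed reflectance at a
# handful of wavelengths (values are illustrative only).
wavelengths = np.array([342.5, 367.0, 376.5, 388.0, 406.0, 416.0, 425.5, 442.0, 451.5, 463.0])
residuals = 0.002 * np.sin(wavelengths / 30.0) + 0.0005 * rng.standard_normal(10)

# Fit a smooth GP to the systematic discrepancy; the white-noise term separates
# measurement scatter from the smooth model-error signal.
kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=1e-7)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(wavelengths[:, None], residuals)

# The learned discrepancy (with its own uncertainty) can then be added to the
# forward model inside the retrieval's likelihood.
query = np.linspace(340.0, 465.0, 6)[:, None]
mean, std = gp.predict(query, return_std=True)
for w, m, s in zip(query[:, 0], mean, std):
    print(f"{w:6.1f} nm: discrepancy {m:+.4f} +/- {s:.4f}")
```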

  5. Community Project for Accelerator Science and Simulation (ComPASS) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, John R.; Cowan, Benjamin M.; Veitzer, S. A.

    2016-03-04

    Tech-X participated across the full range of ComPASS activities, with efforts in the Energy Frontier primarily through modeling of laser plasma accelerators and dielectric laser acceleration, in the Intensity Frontier primarily through electron cloud modeling, and in Uncertainty Quantification being applied to dielectric laser acceleration. In the following we present the progress and status of our activities for the entire period of the ComPASS project for the different areas of Energy Frontier, Intensity Frontier and Uncertainty Quantification.

  6. Climate impacts on palm oil yields in the Nigerian Niger Delta

    NASA Astrophysics Data System (ADS)

    Okoro, Stanley U.; Schickhoff, Udo; Boehner, Juergen; Schneider, Uwe A.; Huth, Neil

    2016-04-01

    Palm oil production has increased in recent decades and is estimated to increase further. The optimal role of palm oil production, however, is controversial because of resource conflicts with alternative land uses. Local conditions and climate change affect resource competition and the desirability of palm oil production. Based on this, crop yield simulations using different climate model output under different climate scenarios could be important tool in addressing the problem of uncertainty quantification among different climate model outputs. Previous studies on this region have focused mostly on single experimental fields, not considering variations in Agro-Ecological Zones, climatic conditions, varieties and management practices and, in most cases not extending to various IPCC climate scenarios and were mostly based on single climate model output. Furthermore, the uncertainty quantification of the climate- impact model has rarely been investigated on this region. To this end we use the biophysical simulation model APSIM (Agricultural Production Systems Simulator) to simulate the regional climate impact on oil palm yield over the Nigerian Niger Delta. We also examine whether the use of crop yield model output ensemble reduces the uncertainty rather than the use of climate model output ensemble. The results could serve as a baseline for policy makers in this region in understanding the interaction between potentials of energy crop production of the region as well as its food security and other negative feedbacks that could be associated with bioenergy from oil palm. Keywords: Climate Change, Climate impacts, Land use and Crop yields.

  7. Uncertainty Quantification of Medium-Term Heat Storage From Short-Term Geophysical Experiments Using Bayesian Evidential Learning

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2018-04-01

    In theory, aquifer thermal energy storage (ATES) systems can recover in winter the heat stored in the aquifer during summer to increase the energy efficiency of the system. In practice, the energy efficiency is often lower than expected from simulations due to spatial heterogeneity of hydraulic properties or non-favorable hydrogeological conditions. A proper design of ATES systems should therefore consider the uncertainty of the prediction related to those parameters. We use a novel framework called Bayesian Evidential Learning (BEL) to estimate the heat storage capacity of an alluvial aquifer using a heat tracing experiment. BEL is based on two main stages: pre- and post-field data acquisition. Before data acquisition, Monte Carlo simulations and global sensitivity analysis are used to assess the information content of the data to reduce the uncertainty of the prediction. After data acquisition, prior falsification and machine learning based on the same Monte Carlo simulations are used to directly assess uncertainty on key prediction variables from observations. The result is a full quantification of the posterior distribution of the prediction conditioned on the observed data, without any explicit full model inversion. We demonstrate the methodology in field conditions and validate the framework using independent measurements.

  8. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  9. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on Delayed Rejection Adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.

  10. Development and Implementation of a Formal Framework for Bottom-up Uncertainty Analysis of Input Emissions: Case Study of Residential Wood Combustion

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Mashayekhi, R.; Saeednooran, S.; Hakami, A.; Ménard, R.; Moran, M. D.; Zhang, J.

    2016-12-01

    We have developed a formal framework for documentation, quantification, and propagation of uncertainties in upstream emissions inventory data at various stages leading to the generation of model-ready gridded emissions through emissions processing software such as the EPA's SMOKE (Sparse Matrix Operator Kernel Emissions) system. To illustrate this framework we present a proof-of-concept case study of a bottom-up quantitative assessment of uncertainties in emissions from residential wood combustion (RWC) in the U.S. and Canada. Uncertainties associated with key inventory parameters are characterized based on existing information sources, including the American Housing Survey (AHS) from the U.S. Census Bureau, Timber Products Output (TPO) surveys from the U.S. Forest Service, TNS Canadian Facts surveys, and the AP-42 emission factor document from the U.S. EPA. The propagation of uncertainties is based on Monte Carlo simulation code external to SMOKE. Latin Hypercube Sampling (LHS) is implemented to generate a set of random realizations of each RWC inventory parameter, for which the uncertainties are assumed to be normally distributed. Random realizations are also obtained for each RWC temporal and chemical speciation profile and spatial surrogate field external to SMOKE using the LHS approach. SMOKE outputs for primary emissions (e.g., CO, VOC) using both RWC emission inventory realizations and perturbed temporal and chemical profiles and spatial surrogates show relative uncertainties of about 30-50% across the U.S. and about 70-100% across Canada. Positive skewness values (up to 2.7) and variable kurtosis values (up to 4.8) were also found. Spatial allocation contributes significantly to the overall uncertainty, particularly in Canada. By applying this framework we are able to produce random realizations of model-ready gridded emissions that along with available meteorological ensembles can be used to propagate uncertainties through chemical transport models. The approach described here provides an effective means for formal quantification of uncertainties in estimated emissions from various source sectors and for continuous documentation, assessment, and reduction of emission uncertainties.
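
    The Latin hypercube propagation step can be sketched outside of any emissions processor as follows; the parameter names, means, and relative standard deviations are hypothetical placeholders, not the inventory values used in the study.

```python
import numpy as np
from scipy.stats import qmc, norm

# Uncertain RWC inventory parameters (illustrative names and values):
# mean and relative standard deviation of each, assumed normal as in the study.
params = {
    "wood_burned_per_household_kg": (1500.0, 0.25),
    "fraction_households_burning":  (0.12,   0.30),
    "co_emission_factor_g_per_kg":  (115.0,  0.20),
}

# Latin hypercube sample in the unit cube, then map through normal inverse CDFs.
sampler = qmc.LatinHypercube(d=len(params), seed=4)
u = sampler.random(n=1000)
realizations = {}
for j, (name, (mean, rel_sd)) in enumerate(params.items()):
    realizations[name] = norm.ppf(u[:, j], loc=mean, scale=rel_sd * mean)

# Each realization would feed one emissions-processing run; here we simply
# combine them into a per-household CO emission estimate.
co_emissions = (realizations["wood_burned_per_household_kg"]
                * realizations["fraction_households_burning"]
                * realizations["co_emission_factor_g_per_kg"])
lo, hi = np.percentile(co_emissions, [2.5, 97.5])
print(f"CO emissions: mean {co_emissions.mean():.0f} g, 95% interval [{lo:.0f}, {hi:.0f}]")
```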

  11. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  12. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere common place, run times for large complex basin models can still be on the order of days to weeks, thus, limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
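
    A minimal sketch of the non-intrusive (regression-based) polynomial chaos idea for Gaussian inputs is shown below; the two-input stand-in model and the sample size are assumptions chosen for illustration, and moments are read directly off the coefficients using the Hermite norms.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from itertools import product
from math import factorial

rng = np.random.default_rng(5)

def model(x):
    """Hypothetical stand-in for a long-running HydroGeoSphere run with two
    standard-normal random inputs (e.g. log-conductivity and recharge)."""
    return np.exp(0.3 * x[:, 0]) + 0.5 * x[:, 1] + 0.1 * x[:, 0] * x[:, 1]

# Total-degree-3 probabilists' Hermite basis in two dimensions.
dim, degree = 2, 3
mindex = [m for m in product(range(degree + 1), repeat=dim) if sum(m) <= degree]

def design(x):
    cols = []
    for m in mindex:
        col = np.ones(len(x))
        for d, k in enumerate(m):
            c = np.zeros(k + 1); c[k] = 1.0
            col = col * He.hermeval(x[:, d], c)
        cols.append(col)
    return np.column_stack(cols)

# Non-intrusive fit: a modest number of model runs, then least squares.
x_train = rng.standard_normal((80, dim))
coeffs, *_ = np.linalg.lstsq(design(x_train), model(x_train), rcond=None)

# For probabilists' Hermite polynomials E[He_k(X)^2] = k!, so moments follow
# directly from the coefficients without further model runs.
norms = np.array([np.prod([factorial(k) for k in m]) for m in mindex])
mean = coeffs[0]                          # multi-index (0, 0) is first in mindex
variance = np.sum(coeffs[1:] ** 2 * norms[1:])
print(f"PCE mean {mean:.4f}, variance {variance:.4f}")
```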

  13. A python framework for environmental model uncertainty analysis

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Doherty, John E.

    2016-01-01

    We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses, an open-source tool that is non-intrusive, easy to use, computationally efficient, and scalable to highly parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
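
    The core FOSM calculation behind such analyses, conditioning a prior parameter covariance on a Jacobian and propagating it linearly to a forecast, can be written out directly; the small matrices below are hypothetical examples and the sketch does not use the pyEMU API itself.

```python
import numpy as np

# Hypothetical dimensions: 3 adjustable parameters, 4 observations, 1 forecast.
J = np.array([[1.0, 0.2, 0.0],      # Jacobian of observations w.r.t. parameters
              [0.5, 1.0, 0.1],
              [0.0, 0.3, 1.0],
              [0.2, 0.0, 0.5]])
y = np.array([0.8, 0.1, 1.5])        # sensitivity of the forecast to parameters

C_prior = np.diag([1.0, 0.5, 2.0])   # prior parameter covariance
C_obs = 0.1 * np.eye(4)              # observation noise covariance

# FOSM (Schur complement) update of the parameter covariance by the data,
# followed by linear propagation to the forecast.
S = J @ C_prior @ J.T + C_obs
C_post = C_prior - C_prior @ J.T @ np.linalg.solve(S, J @ C_prior)

var_prior = y @ C_prior @ y
var_post = y @ C_post @ y
print(f"forecast std dev: prior {np.sqrt(var_prior):.3f}, posterior {np.sqrt(var_post):.3f}")
```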

  14. Using analogues to quantify geological uncertainty in stochastic reserve modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, B.; Brown, I.

    1995-08-01

    The petroleum industry seeks to minimize exploration risk by employing the best possible expertise, methods and tools. Is it possible to quantify the success of this process of risk reduction? Due to inherent uncertainty in predicting geological reality and due to changing environments for hydrocarbon exploration, it is not enough simply to record the proportion of successful wells drilled; in various parts of the world it has been noted that pseudo-random drilling would apparently have been as successful as the actual drilling programme. How, then, should we judge the success of risk reduction? For many years the E&P industry has routinely used Monte Carlo modelling to generate a probability distribution for prospect reserves. One aspect of Monte Carlo modelling which has received insufficient attention, but which is essential for quantifying risk reduction, is the consistency and repeatability with which predictions can be made. Reducing the subjective element inherent in the specification of geological uncertainty allows better quantification of uncertainty in the prediction of reserves, in both exploration and appraisal. Building on work reported at the AAPG annual conventions in 1994 and 1995, the present paper incorporates analogue information with uncertainty modelling. Analogues provide a major step forward in the quantification of risk, but their significance is potentially greater still. The two principal contributors to uncertainty in field and prospect analysis are the hydrocarbon life-cycle and the geometry of the trap. These are usually treated separately. Combining them into a single model is a major contribution to the reduction of risk. This work is based in part on a joint project with Oryx Energy UK Ltd., and thanks are due in particular to Richard Benmore and Mike Cooper.

  15. Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitpas, Guillaume; Whitesides, Russell

    UQHCCI_1 computes the measurement uncertainties of a HCCI engine test bench using the pressure trace and the estimated uncertainties of the measured quantities as inputs, and then propagates them through Bayesian inference and a mixing model.

  16. On uncertainty quantification in hydrogeology and hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  17. Error Quantification and Confidence Assessment of Aerothermal Model Predictions for Hypersonic Aircraft (Preprint)

    DTIC Science & Technology

    2013-09-01

    based confidence metric is used to compare several different model predictions with the experimental data. II. Aerothermal Model Definition and...whereas 5% measurement uncertainty is assumed for aerodynamic pressure and heat flux measurements 4p y and 4Q y . Bayesian updating according... definitive conclusions for these particular aerodynamic models. However, given the confidence associated with the 4 sdp predictions for Run 30 (H/D

  18. Classifying the Sizes of Explosive Eruptions using Tephra Deposits: The Advantages of a Numerical Inversion Approach

    NASA Astrophysics Data System (ADS)

    Connor, C.; Connor, L.; White, J.

    2015-12-01

    Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification is implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and here is applied using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 eruption of Cerro Negro (Nicaragua), the 2011 Kirishima-Shinmoedake (Japan), and the 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.
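
    The inversion-plus-linearized-uncertainty idea can be sketched with a toy forward model and SciPy's Levenberg-Marquardt solver standing in for PEST++ and Tephra2; the exponential thinning model, prior values, and regularization weight below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

def tephra_thickness(params, dist_km):
    """Hypothetical toy forward model: deposit thickness decays exponentially
    with distance; the decay scale grows with column height."""
    log_mass, column_km = params
    scale = 5.0 * column_km
    return np.exp(log_mass) * np.exp(-dist_km / scale) / scale

# Synthetic "field" data at a handful of sample sites.
dist = np.array([2.0, 5.0, 10.0, 20.0, 35.0, 50.0])
true = np.array([3.0, 8.0])                       # log-mass, column height (km)
obs = tephra_thickness(true, dist) * (1.0 + 0.1 * rng.standard_normal(dist.size))

# Residuals = data misfit plus a Tikhonov penalty pulling toward prior values.
prior, weight = np.array([2.0, 10.0]), 0.05
def residuals(p):
    misfit = (tephra_thickness(p, dist) - obs) / (0.1 * obs)
    penalty = weight * (p - prior)
    return np.concatenate([misfit, penalty])

fit = least_squares(residuals, x0=prior, method="lm")   # Levenberg-Marquardt
# Linearized (approximate) posterior covariance from the Jacobian at the solution.
cov = np.linalg.inv(fit.jac.T @ fit.jac)
print("log-mass  %.2f +/- %.2f" % (fit.x[0], np.sqrt(cov[0, 0])))
print("column km %.2f +/- %.2f" % (fit.x[1], np.sqrt(cov[1, 1])))
```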

  19. Analysis of actuator delay and its effect on uncertainty quantification for real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai

    2017-10-01

    Uncertainties in structural properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate the effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied for RTHS without delay to determine the order of PCE, the number of sample points, and the method for coefficient calculation. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially with respect to actuator delay, respectively. Sensitivity analysis through Sobol indices also indicates the influence of the single random variable decreases while the coupling effect increases with the increase of actuator delay.
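
    Once a PCE has been fitted, Sobol indices follow directly from its coefficients; the sketch below assumes a two-variable Hermite expansion with made-up coefficients rather than the RTHS model of the paper.

```python
import numpy as np
from math import factorial

# Suppose a Hermite PCE in two random variables (stiffness, damping) has already
# been fitted; the multi-indices and coefficients below are illustrative only.
mindex = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
coeffs = np.array([4.20, 0.80, 0.30, 0.15, 0.25, 0.05])
norms = np.array([np.prod([factorial(k) for k in m]) for m in mindex])

# Variance decomposition: each non-constant term contributes coeff^2 * norm.
contrib = coeffs ** 2 * norms
total_var = contrib[1:].sum()

def sobol_first(i):
    """First-order index: terms involving only variable i."""
    sel = [j for j, m in enumerate(mindex)
           if m[i] > 0 and all(k == 0 for d, k in enumerate(m) if d != i)]
    return contrib[sel].sum() / total_var

def sobol_total(i):
    """Total index: every term in which variable i appears."""
    sel = [j for j, m in enumerate(mindex) if m[i] > 0]
    return contrib[sel].sum() / total_var

for i, name in enumerate(["stiffness", "damping"]):
    print(f"{name}: S1 = {sobol_first(i):.3f}, ST = {sobol_total(i):.3f}")
```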

  20. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce data sets for generating benchmark data sets.

  1. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanic compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analyses focuses on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of number, type and location of data.

  2. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  3. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions tomore » surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.« less

  4. Uncertainty Quantification in High Throughput Screening ...

    EPA Pesticide Factsheets

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for the fitted model parameters.
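
    The bootstrap step can be sketched with a simple Hill model and SciPy's curve_fit; the concentration-response values and the unit-slope Hill form are illustrative assumptions, not ToxCast data or the pipeline's actual model suite.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)

def hill(conc, top, ac50):
    """Simple Hill model with unit slope, used here purely for illustration."""
    return top * conc / (ac50 + conc)

# Illustrative concentration-response data (micromolar vs. normalized response).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([0.02, 0.05, 0.18, 0.42, 0.71, 0.88, 0.95]) + 0.05 * rng.standard_normal(7)

# Nonparametric bootstrap: refit the curve to resampled (conc, resp) pairs
# and collect the distribution of the potency parameter AC50.
ac50_samples = []
for _ in range(500):
    idx = rng.integers(0, conc.size, conc.size)
    try:
        popt, _ = curve_fit(hill, conc[idx], resp[idx], p0=[1.0, 3.0], maxfev=5000)
        ac50_samples.append(popt[1])
    except RuntimeError:
        continue   # skip resamples where the fit fails to converge

lo, hi = np.percentile(ac50_samples, [2.5, 97.5])
print(f"AC50 95% bootstrap interval: [{lo:.2f}, {hi:.2f}] uM")
```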

  5. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)

  6. Uncertainty, ensembles and air quality dispersion modeling: applications and challenges

    NASA Astrophysics Data System (ADS)

    Dabberdt, Walter F.; Miller, Erik

    The past two decades have seen significant advances in mesoscale meteorological modeling research and applications, such as the development of sophisticated and now widely used advanced mesoscale prognostic models, large eddy simulation models, four-dimensional data assimilation, adjoint models, adaptive and targeted observational strategies, and ensemble and probabilistic forecasts. Some of these advances are now being applied to urban air quality modeling and applications. Looking forward, it is anticipated that the high-priority air quality issues for the near-to-intermediate future will likely include: (1) routine operational forecasting of adverse air quality episodes; (2) real-time high-level support to emergency response activities; and (3) quantification of model uncertainty. Special attention is focused here on the quantification of model uncertainty through the use of ensemble simulations. Application to emergency-response dispersion modeling is illustrated using an actual event that involved the accidental release of the toxic chemical oleum. Both surface footprints of mass concentration and the associated probability distributions at individual receptors are seen to provide valuable quantitative indicators of the range of expected concentrations and their associated uncertainty.

  7. Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Leisenring, Marc; Moradkhani, Hamid

    2012-10-01

    A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle-filter-based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSC were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.
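
    The core of such a particle filter (propagate, weight by the likelihood, resample) can be sketched for a scalar storage state; the toy state equation, noise levels, and synthetic observations below are assumptions for illustration and omit the hyperparameter scaling and bias correction described above.

```python
import numpy as np

rng = np.random.default_rng(8)

n_particles, n_steps = 500, 50
obs_sigma, proc_sigma = 0.3, 0.1

# Synthetic "truth" and observations for a toy storage model S_t = 0.95 S_{t-1} + forcing.
truth = np.zeros(n_steps); truth[0] = 5.0
forcing = 0.3 * np.ones(n_steps)
for t in range(1, n_steps):
    truth[t] = 0.95 * truth[t - 1] + forcing[t]
obs = truth + obs_sigma * rng.standard_normal(n_steps)

# Bootstrap particle filter: propagate, weight by the likelihood, resample.
particles = rng.normal(5.0, 1.0, n_particles)
estimate = np.zeros(n_steps)
for t in range(n_steps):
    particles = 0.95 * particles + forcing[t] + proc_sigma * rng.standard_normal(n_particles)
    weights = np.exp(-0.5 * ((obs[t] - particles) / obs_sigma) ** 2)
    weights /= weights.sum()
    estimate[t] = np.sum(weights * particles)
    # Systematic resampling keeps the ensemble from degenerating.
    idx = np.searchsorted(np.cumsum(weights), (rng.uniform() + np.arange(n_particles)) / n_particles)
    particles = particles[idx]

print(f"final state: truth {truth[-1]:.2f}, filter mean {estimate[-1]:.2f}")
```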

  8. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  9. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  10. Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.

    2006-01-01

    The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.

  11. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    NASA Technical Reports Server (NTRS)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. This approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprised of multiple sources, as well as an uncertainty bounds database for each data source such that a full vehicle uncertainty analysis is possible even when approaching or beyond Loss of Control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss of control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.

  12. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
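
    The basic multi-fidelity (control-variate) correction can be sketched with two analytic stand-ins for the high- and low-fidelity models; the functions and sample budgets below are assumptions for illustration, not the PSAAP II solver chain.

```python
import numpy as np

rng = np.random.default_rng(9)

def high_fidelity(x):
    """Hypothetical expensive QoI (e.g. from a resolved particle-laden flow run)."""
    return np.sin(x) + 0.05 * x ** 2

def low_fidelity(x):
    """Cheaper, correlated approximation of the same QoI."""
    return np.sin(x)

# Budget: few high-fidelity runs, many low-fidelity runs (shared plus extra inputs).
n_hf, n_lf = 50, 5000
x_hf = rng.normal(size=n_hf)
x_lf = rng.normal(size=n_lf)

hf = high_fidelity(x_hf)
lf_on_hf = low_fidelity(x_hf)
lf = low_fidelity(x_lf)

# Control-variate coefficient from the sample covariance on the shared inputs.
alpha = np.cov(hf, lf_on_hf)[0, 1] / np.var(lf_on_hf, ddof=1)

# Multi-fidelity estimator: HF mean corrected by the better-resolved LF mean.
mf_estimate = hf.mean() + alpha * (lf.mean() - lf_on_hf.mean())
print(f"plain MC (50 HF runs): {hf.mean():.4f}")
print(f"multi-fidelity estimate: {mf_estimate:.4f}")
```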

  13. Uncertainty quantification for personalized analyses of human proximal femurs.

    PubMed

    Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar

    2016-02-29

    Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify the influence we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 are attributable to the uncertainty in the E-ρ relationship, while ≈10% are caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial inferior sides of the neck exhibited the largest probabilities for tensile and compression failure, however all were very small (pf<0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs.

  14. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.

    Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, which model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.

  15. Uncertainty quantification and validation of combined hydrological and macroeconomic analyses.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Jacquelynne; Parks, Mancel Jordan; Jennings, Barbara Joan

    2010-09-01

    Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come, and uncertainty in the local-to-national impacts means that perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.

  16. Uncertainty in Agricultural Impact Assessment

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Mearns, Linda O.; Rivington, Michael; Antle, John M.; Ruane, Alexander C.

    2014-01-01

    This chapter considers issues concerning uncertainty associated with modeling and its use within agricultural impact assessments. Information about uncertainty is important for those who develop assessment methods, since that information indicates the need for, and the possibility of, improvement of the methods and databases. Such information also allows one to compare alternative methods. Information about the sources of uncertainties is an aid in prioritizing further work on the impact assessment method. Uncertainty information is also necessary for those who apply assessment methods, e.g., for projecting climate change impacts on agricultural production and for stakeholders who want to use the results as part of a decision-making process (e.g., for adaptation planning). For them, uncertainty information indicates the degree of confidence they can place in the simulated results. Quantification of uncertainty also provides stakeholders with an important guideline for making decisions that are robust across the known uncertainties. Thus, uncertainty information is important for any decision based on impact assessment. Ultimately, we are interested in knowledge about uncertainty so that information can be used to achieve positive outcomes from agricultural modeling and impact assessment.

  17. Modeling transport phenomena and uncertainty quantification in solidification processes

    NASA Astrophysics Data System (ADS)

    Fezi, Kyle S.

    Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold, followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied, and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been assessed in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.

  18. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  19. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.

  20. Uncertainty quantification and optimal decisions

    PubMed Central

    2017-01-01

    A mathematical model can be analysed to construct policies for action that are close to optimal for the model. If the model is accurate, such policies will be close to optimal when implemented in the real world. In this paper, the different aspects of an ideal workflow are reviewed: modelling, forecasting, evaluating forecasts, data assimilation and constructing control policies for decision-making. The example of the oil industry is used to motivate the discussion, and other examples, such as weather forecasting and precision agriculture, are used to argue that the same mathematical ideas apply in different contexts. Particular emphasis is placed on (i) uncertainty quantification in forecasting and (ii) how decisions are optimized and made robust to uncertainty in models and judgements. This necessitates full use of the relevant data and, by balancing costs and benefits over the long term, may suggest policies quite different from those relevant to the short term. PMID:28484343

  1. Modelling Freshwater Resources at the Global Scale: Challenges and Prospects

    NASA Technical Reports Server (NTRS)

    Doll, Petra; Douville, Herve; Guntner, Andreas; Schmied, Hannes Muller; Wada, Yoshihide

    2015-01-01

    Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater-surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained not only by data of insufficient quality but also by GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyper-resolution global hydrological modelling, where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.

  2. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE-based measurements with observations from other sources.

  3. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm, at a nadir scan orientation, to 8 cm at scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
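
    A highly simplified, first-order (delta-method) version of this kind of error propagation is sketched below for the vertical coordinate of a single return, z = R·cos(θ), with independent range and scan-angle errors. The error magnitudes and the 1200 m flying height are illustrative assumptions only, and the GPS/IMU and beam-divergence terms present in full sensor models are ignored.

```python
import numpy as np

def vertical_sigma(R, theta_deg, sigma_R=0.03, sigma_theta_deg=0.005):
    """First-order propagated vertical error for z = R * cos(theta)."""
    theta = np.radians(theta_deg)
    sigma_theta = np.radians(sigma_theta_deg)
    dz_dR = np.cos(theta)             # sensitivity of z to range error
    dz_dtheta = -R * np.sin(theta)    # sensitivity of z to scan-angle error
    return np.sqrt((dz_dR * sigma_R) ** 2 + (dz_dtheta * sigma_theta) ** 2)

altitude = 1200.0                     # m above ground (illustrative value)
for theta_deg in (0.0, 7.5, 15.0):
    R = altitude / np.cos(np.radians(theta_deg))   # slant range at this scan angle
    print(f"scan angle {theta_deg:4.1f} deg -> sigma_z ~ "
          f"{100 * vertical_sigma(R, theta_deg):.1f} cm")
```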

  4. Modeling of structural uncertainties in Reynolds-averaged Navier-Stokes closures

    NASA Astrophysics Data System (ADS)

    Emory, Michael; Larsson, Johan; Iaccarino, Gianluca

    2013-11-01

    Estimation of the uncertainty in numerical predictions by Reynolds-averaged Navier-Stokes closures is a vital step in building confidence in such predictions. An approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed. The methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.

  5. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches are RANS models with two-equation closures and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Thence, using benchmark flows, along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).

  6. A General Uncertainty Quantification Methodology for Cloud Microphysical Property Retrievals

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Xie, S.; Chen, X.; Zhao, C.

    2014-12-01

    The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program provides long-term (~20 years) ground-based cloud remote sensing observations. However, there are large uncertainties in the retrieval products of cloud microphysical properties based on the active and/or passive remote-sensing measurements. To address this uncertainty issue, a DOE Atmospheric System Research scientific focus study, Quantification of Uncertainties in Cloud Retrievals (QUICR), has been formed. In addition to an overview of recent progress of QUICR, we will demonstrate the capacity of an observation-based general uncertainty quantification (UQ) methodology via the ARM Climate Research Facility baseline cloud microphysical properties (MICROBASE) product. This UQ method utilizes the Karhunen-Loève expansion (KLE) and the Central Limit Theorem (CLT) to quantify the retrieval uncertainties from observations and algorithm parameters. The input perturbations are imposed on major modes to take into account the cross-correlations between input data, which greatly reduces the dimension of random variables (up to a factor of 50) and quantifies vertically resolved full probability distribution functions of retrieved quantities. Moreover, this KLE/CLT approach has the capability of attributing the uncertainties in the retrieval output to individual uncertainty sources and thus sheds light on improving the retrieval algorithm and observations. We will present the results of a case study for the ice water content at the Southern Great Plains during an intensive observing period on March 9, 2000. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
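
    The dimension-reduction step based on the Karhunen-Loève expansion can be illustrated generically: perturbations of a correlated input profile are generated from a handful of independent random variables associated with the leading eigenmodes of an assumed covariance. The exponential correlation model, profile length, and 99% variance threshold below are illustrative assumptions, not the MICROBASE retrieval inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated "input profile" covariance (e.g., a vertically resolved retrieval
# input) with an assumed exponential correlation structure.
n_levels, corr_length = 50, 8.0
z = np.arange(n_levels)
cov = np.exp(-np.abs(z[:, None] - z[None, :]) / corr_length)

# Karhunen-Loeve expansion: keep the leading eigenmodes that explain most of
# the variance and perturb only along those modes.
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

n_keep = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), 0.99)) + 1
print(f"{n_keep} of {n_levels} modes capture 99% of the input variance")

# Correlated perturbations generated from a handful of independent normals.
xi = rng.standard_normal((1000, n_keep))
modes = np.sqrt(eigval[:n_keep]) * eigvec[:, :n_keep]
perturbations = xi @ modes.T
print("max deviation of sample covariance from target:",
      f"{np.abs(np.cov(perturbations.T) - cov).max():.3f}")
```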

  7. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

    This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each candidate is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
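
    The first step of such a multimodel strategy, scoring candidate probability models on a small dataset and converting the scores into model probabilities, can be sketched with AIC-based Akaike weights; the synthetic 25-point dataset and the four candidate families below are assumptions for illustration, and the subsequent importance-sampling propagation step is omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(mean=1.0, sigma=0.4, size=25)   # deliberately small dataset

# Candidate probability models, each fit by maximum likelihood and scored with
# AIC; Akaike weights approximate the probability that each candidate is the
# best model in the Kullback-Leibler sense.
candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
              "gamma": stats.gamma, "weibull": stats.weibull_min}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * len(params) - 2 * loglik

best = min(aic.values())
raw = {name: np.exp(-(a - best) / 2.0) for name, a in aic.items()}
total = sum(raw.values())
for name in candidates:
    print(f"{name:>9}: AIC = {aic[name]:7.2f}   Akaike weight = {raw[name] / total:.3f}")
```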

  8. Survey of Existing Uncertainty Quantification Capabilities for Army Relevant Problems

    DTIC Science & Technology

    2017-11-27

    ARL-TR-8218, November 2017. US Army Research Laboratory technical report: Survey of Existing Uncertainty Quantification Capabilities for Army-Relevant Problems, by James J. Ramsey.

  9. Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch., E-mail: jean-christophe.sublet@ccfe.ac.uk

    2015-01-15

    The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.

  10. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    NASA Astrophysics Data System (ADS)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously on a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that, when knowledge about the problem is lacking, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research, it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced by using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for quantification of epistemic uncertainty was selected on a large-scale combined-cycle power generation system problem. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight on the uncertainties arising from incomplete information or lack of knowledge as compared to deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing the higher-order technology interactions and improving the predicted system performance.

  11. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling feasible in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  12. How well do we know the incoming solar infrared radiation?

    NASA Astrophysics Data System (ADS)

    Elsey, Jonathan; Coleman, Marc; Gardiner, Tom; Shine, Keith

    2017-04-01

    The solar spectral irradiance (SSI) has been identified as a key climate variable by the Global Climate Observing System (Bojinski et al. 2014, Bull. Amer. Meteor. Soc.). It is important in the modelling of atmospheric radiative transfer and in the quantification of the global energy budget. However, in the near-infrared spectral region (between 2000 and 10000 cm-1) there exists a discrepancy of 7% between spectra measured from the space-based SOLSPEC instrument (Thuillier et al. 2015, Solar Physics) and those from a ground-based Langley technique (Bolsée et al. 2014, Solar Physics). This same difference is also present between different analyses of the SOLSPEC data. This work aims to reconcile some of these differences by presenting an estimate of the near-infrared SSI obtained from ground-based measurements taken using an absolutely calibrated Fourier transform spectrometer. Spectra are obtained both using the Langley technique and by direct comparison with a radiative transfer model, with appropriate handling of both aerosol scattering and molecular continuum absorption. Particular focus is dedicated to the quantification of uncertainty in these spectra, from both the inherent uncertainty in the measurement setup and that from the use of the radiative transfer code and its inputs.
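
    The Langley technique itself reduces to a linear regression of log signal against airmass, with the intercept extrapolated to zero airmass giving the top-of-atmosphere value. The sketch below applies it to synthetic Beer-Lambert data with made-up values of the extraterrestrial signal, optical depth, and noise level, together with a crude intercept uncertainty from the fit residuals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic clear-sky Langley plot: Beer-Lambert attenuation V = V0 * exp(-tau * m)
# plus measurement noise; V0, tau, and the noise level are invented values.
V0_true, tau = 1.80, 0.12
airmass = np.linspace(2.0, 6.0, 40)
signal = V0_true * np.exp(-tau * airmass) * (1.0 + 0.005 * rng.standard_normal(airmass.size))

# Regress ln(V) on airmass; the intercept at zero airmass estimates ln(V0).
slope, intercept = np.polyfit(airmass, np.log(signal), 1)
V0_est = np.exp(intercept)

# Crude intercept uncertainty from the regression residuals (illustrative only).
resid = np.log(signal) - (slope * airmass + intercept)
s_xx = np.sum((airmass - airmass.mean()) ** 2)
se_intercept = resid.std(ddof=2) * np.sqrt(1.0 / airmass.size + airmass.mean() ** 2 / s_xx)

print(f"V0 estimate: {V0_est:.3f} (true {V0_true}), ~1-sigma: {V0_est * se_intercept:.3f}")
print(f"optical depth estimate: {-slope:.3f}")
```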

  13. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
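
    The distinction between the two error models can be demonstrated on synthetic data: when errors are generated multiplicatively, additive residuals show strongly rain-rate-dependent spread while log-space residuals do not. The gamma-distributed "truth", bias, and noise level below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic "truth" and "satellite estimate" in which the error is genuinely
# multiplicative: estimate = truth * exp(bias + noise). All values are invented.
truth = rng.gamma(shape=0.8, scale=10.0, size=5000) + 0.1        # daily rain (mm)
estimate = truth * np.exp(0.2 + 0.35 * rng.standard_normal(truth.size))

# Additive model:        estimate = truth + eps
# Multiplicative model:  ln(estimate) = ln(truth) + eps
add_resid = estimate - truth
mul_resid = np.log(estimate) - np.log(truth)

light, heavy = truth < 5.0, truth > 20.0
print("additive residual std, light / heavy rain: "
      f"{add_resid[light].std():.2f} / {add_resid[heavy].std():.2f} mm")
print("multiplicative residual std, light / heavy rain: "
      f"{mul_resid[light].std():.3f} / {mul_resid[heavy].std():.3f}")
```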

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattison, M.B.; Blackman, H.S.; Novack, S.D.

    The Office for Analysis and Evaluation of Operational Data (AEOD) has sought the assistance of the Idaho National Engineering Laboratory (INEL) to make some significant enhancements to the SAPHIRE-based Accident Sequence Precursor (ASP) models recently developed by the INEL. The challenge of this project is to provide the features of a full-scale PRA within the framework of the simplified ASP models. Some of these features include: (1) uncertainty analysis addressing the standard PRA uncertainties and the uncertainties unique to the ASP models and methodology, (2) incorporation and proper quantification of individual human actions and the interaction among human actions, (3) enhanced treatment of common cause failures, and (4) extension of the ASP models to more closely mimic full-scale PRAs (inclusion of more initiators, explicitly modeling support system failures, etc.). This paper provides an overview of the methods being used to make the above improvements.

  16. CASL Dakota Capabilities Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Simmons, Chris; Williams, Brian J.

    2017-10-10

    The Dakota software project serves the mission of Sandia National Laboratories and supports a worldwide user community by delivering state-of-the-art research and robust, usable software for optimization and uncertainty quantification. These capabilities enable advanced exploration and risk-informed prediction with a wide range of computational science and engineering models. Dakota is the verification and validation (V&V) / uncertainty quantification (UQ) software delivery vehicle for CASL, allowing analysts across focus areas to apply these capabilities to myriad nuclear engineering analyses.

  17. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  18. Non-parametric data-based approach for the quantification and communication of uncertainties in river flood forecasts

    NASA Astrophysics Data System (ADS)

    Van Steenbergen, N.; Willems, P.

    2012-04-01

    Reliable flood forecasts are the most important non-structural measures to reduce the impact of floods. However, flood forecasting systems are subject to uncertainty originating from the input data, model structure and model parameters of the different hydraulic and hydrological submodels. To quantify this uncertainty, a non-parametric data-based approach has been developed. This approach analyses the historical forecast residuals (differences between the predictions and the observations at river gauging stations) without using a predefined statistical error distribution. Because the residuals are correlated with the value of the forecasted water level and the lead time, the residuals are split up into discrete classes of simulated water levels and lead times. For each class, percentile values of the model residuals are calculated and stored in a 'three-dimensional error matrix'. By 3D interpolation in this error matrix, the uncertainty in new forecasted water levels can be quantified. In addition to the quantification of the uncertainty, the communication of this uncertainty is equally important. The communication has to be done in a consistent way, reducing the chance of misinterpretation. Also, the communication needs to be adapted to the audience; the majority of the larger public is not interested in in-depth information on the uncertainty of the predicted water levels, but is only interested in information on the likelihood of exceedance of certain alarm levels. Water managers need more information, e.g., time-dependent uncertainty information, because they rely on this information to undertake the appropriate flood mitigation action. There are various ways of presenting uncertainty information (numerical, linguistic, graphical, time (in)dependent, etc.), each with its advantages and disadvantages for a specific audience. A useful method to communicate uncertainty of flood forecasts is probabilistic flood mapping. These maps give a representation of the probability of flooding of a certain area, based on the uncertainty assessment of the flood forecasts. By using this type of map, water managers can focus their attention on the areas with the highest flood probability. Also, the larger public can consult these maps for information on the probability of flooding at their specific location, such that they can take proactive measures to reduce personal damage. The method of quantifying the uncertainty was implemented in the operational flood forecasting system for the navigable rivers in the Flanders region of Belgium. The method has shown clear benefits during the floods of the last two years.
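
    A minimal version of the residual-classification idea described above is sketched below: archived residuals are binned by forecasted water level and lead time, percentiles are stored in an error matrix, and the matrix is interpolated to attach an uncertainty band to a new forecast. The synthetic forecast archive and the error model generating it are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(5)

# Hypothetical archive of past forecasts: forecast water level (m), lead time (h)
# and the residual (observation minus forecast). Error grows with both.
n = 20000
forecast = rng.uniform(0.5, 4.5, n)
lead = rng.uniform(1.0, 48.0, n)
resid = rng.normal(0.0, 0.05 + 0.04 * forecast + 0.002 * lead)

level_edges = np.linspace(0.5, 4.5, 9)
lead_edges = np.linspace(1.0, 48.0, 7)
percentiles = (5, 50, 95)

# "Error matrix": residual percentiles per (water-level class, lead-time class).
err = np.empty((level_edges.size - 1, lead_edges.size - 1, len(percentiles)))
for i in range(level_edges.size - 1):
    for j in range(lead_edges.size - 1):
        sel = ((forecast >= level_edges[i]) & (forecast < level_edges[i + 1]) &
               (lead >= lead_edges[j]) & (lead < lead_edges[j + 1]))
        err[i, j] = np.percentile(resid[sel], percentiles)

# Interpolate in the matrix to attach an uncertainty band to a new forecast.
centers = lambda e: 0.5 * (e[:-1] + e[1:])
new_point = [[3.2, 24.0]]                       # forecast of 3.2 m at 24 h lead time
band = [RegularGridInterpolator((centers(level_edges), centers(lead_edges)),
                                err[:, :, k], bounds_error=False,
                                fill_value=None)(new_point)[0]
        for k in range(len(percentiles))]
print(f"5-95% residual band at 3.2 m / 24 h: [{band[0]:+.3f}, {band[2]:+.3f}] m")
```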

  19. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or «proxy» solvers), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and risk of overfitting, we follow a Leave-One-Out Cross Validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and we assess the performance of the method. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional error modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A.H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).

  20. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    NASA Astrophysics Data System (ADS)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of errors (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative in order to improve the reliability of modeling results. Uncertainty analysis must also address difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in central Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.56, R2 > 0.91, NSE > 0.89, and 0.18
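
    A toy version of the GLUE algorithm, one of the four methods compared above, is sketched below with a linear-reservoir recession standing in for the SWAT model: parameter sets are sampled, scored with the Nash-Sutcliffe efficiency, thresholded into a behavioral ensemble, and summarized by the P-factor and R-factor. All model choices, thresholds, and data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "watershed": a linear-reservoir recession Q(t) = a * exp(-b * t) stands in
# for the hydrological model; a and b are the uncertain parameters.
t = np.arange(30.0)
obs = 12.0 * np.exp(-0.15 * t) + rng.normal(0.0, 0.4, t.size)   # synthetic observations

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# GLUE: sample the parameter space, keep "behavioral" sets above a (subjective)
# likelihood threshold, and build prediction bands from the behavioral ensemble.
a = rng.uniform(5.0, 20.0, 5000)
b = rng.uniform(0.05, 0.30, 5000)
sims = a[:, None] * np.exp(-b[:, None] * t)
scores = np.array([nse(s, obs) for s in sims])

behavioral = sims[scores > 0.7]
lo, hi = np.percentile(behavioral, [2.5, 97.5], axis=0)

p_factor = np.mean((obs >= lo) & (obs <= hi))   # fraction of observations inside the band
r_factor = np.mean(hi - lo) / obs.std()         # average band width relative to obs spread
print(f"behavioral sets: {len(behavioral)}, P-factor = {p_factor:.2f}, R-factor = {r_factor:.2f}")
```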

  1. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close-packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that the ranking of influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  2. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial-sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° Delta Wing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier-series-based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Jarman, Kenneth D.; Xu, Zhijie

    This report describes our initial research to quantify uncertainties in the identification and characterization of possible attack states in a network. As a result, we should be able to estimate the current state in which the network is operating, based on a wide variety of network data, and attach a defensible measure of confidence to these state estimates. The output of this research will be new uncertainty quantification (UQ) methods to help develop a process for model development and apply UQ to characterize attacks/adversaries, create an understanding of the degree to which methods scale to "big" data, and offer methods for addressing model approaches with regard to validation and accuracy.

  4. Hydrologic Impacts of Climate Change: Quantification of Uncertainties (Alexander von Humboldt Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Mujumdar, Pradeep P.

    2014-05-01

    Climate change results in regional hydrologic change. The three prominent signals of global climate change, viz., increase in global average temperatures, rise in sea levels and change in precipitation patterns convert into signals of regional hydrologic change in terms of modifications in water availability, evaporative water demand, hydrologic extremes of floods and droughts, water quality, salinity intrusion in coastal aquifers, groundwater recharge and other related phenomena. A major research focus in hydrologic sciences in recent years has been assessment of impacts of climate change at regional scales. An important research issue addressed in this context deals with responses of water fluxes on a catchment scale to the global climatic change. A commonly adopted methodology for assessing the regional hydrologic impacts of climate change is to use the climate projections provided by the General Circulation Models (GCMs) for specified emission scenarios in conjunction with the process-based hydrologic models to generate the corresponding hydrologic projections. The scaling problem arising because of the large spatial scales at which the GCMs operate compared to those required in distributed hydrologic models, and their inability to satisfactorily simulate the variables of interest to hydrology are addressed by downscaling the GCM simulations to hydrologic scales. Projections obtained with this procedure are burdened with a large uncertainty introduced by the choice of GCMs and emission scenarios, small samples of historical data against which the models are calibrated, downscaling methods used and other sources. Development of methodologies to quantify and reduce such uncertainties is a current area of research in hydrology. In this presentation, an overview of recent research carried out by the author's group on assessment of hydrologic impacts of climate change addressing scale issues and quantification of uncertainties is provided. Methodologies developed with conditional random fields, Dempster-Shafer theory, possibility theory, imprecise probabilities and non-stationary extreme value theory are discussed. Specific applications on uncertainty quantification in impacts on streamflows, evaporative water demands, river water quality and urban flooding are presented. A brief discussion on detection and attribution of hydrologic change at river basin scales, contribution of landuse change and likely alterations in return levels of hydrologic extremes is also provided.

  5. Quantification of dose uncertainties for the bladder in prostate cancer radiotherapy based on dominant eigenmodes

    NASA Astrophysics Data System (ADS)

    Rios, Richard; Acosta, Oscar; Lafond, Caroline; Espinosa, Jairo; de Crevoisier, Renaud

    2017-11-01

    In radiotherapy for prostate cancer, the dose to the bladder computed at treatment planning may be a poor surrogate for the actual delivered dose, as the bladder presents the largest inter-fraction shape variations during treatment. This paper presents PCA models as a virtual tool to estimate dosimetric uncertainties for the bladder produced by motion and deformation between fractions. Our goal is to propose a methodology to determine the minimum number of modes required to quantify dose uncertainties of the bladder for motion/deformation models based on PCA. We trained individual PCA models using the bladder contours available from three patients with a planning computed tomography (CT) and on-treatment cone-beam CTs (CBCTs). Based on the above models and via deformable image registration (DIR), we estimated two accumulated doses: first, an accumulated dose obtained by integrating the planning dose over the Gaussian probability distribution of the PCA model; and second, an accumulated dose obtained by simulating treatment courses via a Monte Carlo approach. We also computed a reference accumulated dose for each patient using his available images via DIR. Finally, we compared the planning dose with the three accumulated doses, and we calculated local dose variability and dose-volume histogram uncertainties.

  6. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Treesearch

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  7. Robust approaches to quantification of margin and uncertainty for sparse data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin

    Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low-probability, high-consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
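
    The core risk can be demonstrated with a small numerical experiment: when a normal distribution is fit to a few samples from a heavier-tailed population, the extrapolated 99.9th-percentile estimate is both biased and highly variable. The lognormal population, sample size of 20, and number of replications below are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# "True" population is lognormal (heavier upper tail); the analyst sees only
# 20 observations, fits a normal distribution, and extrapolates to the
# 99.9th percentile.
population = stats.lognorm(s=0.5, scale=1.0)
true_q999 = population.ppf(0.999)

estimates = []
for _ in range(2000):
    sample = population.rvs(20, random_state=rng)
    mu, sigma = sample.mean(), sample.std(ddof=1)
    estimates.append(stats.norm(mu, sigma).ppf(0.999))   # extrapolated tail quantile

estimates = np.array(estimates)
print(f"true 99.9th percentile:        {true_q999:.2f}")
print(f"normal-fit estimates (5-95%):  {np.percentile(estimates, 5):.2f} "
      f"to {np.percentile(estimates, 95):.2f}")
```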

  8. Quantification of the Uncertainties for the Space Launch System Liftoff/Transition and Ascent Databases

    NASA Technical Reports Server (NTRS)

    Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.

    2016-01-01

    A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.

  9. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.

  10. Uncertainty Quantification For Physical and Numerical Diffusion Models In Inertial Confinement Fusion Simulations

    NASA Astrophysics Data System (ADS)

    Rana, Verinder S.

    This thesis concerns simulations of Inertial Confinement Fusion (ICF). ICF experiments are carried out at a large-scale facility, the National Ignition Facility. The experiments have failed to reproduce design calculations, so uncertainty quantification of the calculations is an important asset. Uncertainties can be classified as aleatoric or epistemic; this thesis is concerned with aleatoric uncertainty quantification. Among the many uncertain aspects that affect the simulations, we have narrowed our study to the following possible uncertainties. The first source of uncertainty we present is the amount of pre-heating of the fuel done by hot electrons. The second source of uncertainty we consider is the effect of algorithmic and physical transport diffusion on the hot spot thermodynamics. Physical transport mechanisms play an important role for the entire duration of the ICF implosion, so modeling them correctly is vital. In addition, codes that simulate material mixing introduce numerically (algorithmically) generated transport across the material interfaces. This adds another layer of uncertainty to the solution through the artificially added diffusion. The third source of uncertainty we consider is physical model uncertainty. The fourth source of uncertainty we focus on is a single localized surface perturbation (a divot), which creates a perturbation to the solution that can potentially enter the hot spot and diminish the thermonuclear environment. Jets of ablator material are hypothesized to enter the hot spot and cool the core, contributing to the observed reaction levels being lower than predicted. A plasma transport package, Transport for Inertial Confinement Fusion (TICF), has been implemented in the radiation hydrodynamics code FLASH from the University of Chicago. TICF has thermal, viscous and mass diffusion models that span the entire ICF implosion regime. We introduced a Quantum Molecular Dynamics-calibrated thermal conduction model due to Hu for thermal transport. Numerical approximation uncertainties are introduced by the choice of hydrodynamic solver for a particular flow. Solvers tend to be diffusive at material interfaces, and the Front Tracking (FT) algorithm, which is available as existing software in the form of an API, helps to ameliorate such effects. The FT algorithm has also been implemented in FLASH, and we use it to study the effect that divots can have on hot spot properties.

  11. Evidence Theory Based Uncertainty Quantification in Radiological Risk due to Accidental Release of Radioactivity from a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Ingale, S. V.; Datta, D.

    2010-10-01

    The consequence of an accidental release of radioactivity from a nuclear power plant is assessed in terms of exposure or dose to members of the public. Assessment of risk is routed through this dose computation. Dose computation depends on the basic dose assessment model and the exposure pathways. One of the exposure pathways is the ingestion of contaminated food. The aim of the present paper is to compute the uncertainty associated with the risk to members of the public due to the ingestion of contaminated food. Because the governing parameters of the ingestion dose assessment model are imprecise, we use evidence theory to compute bounds on the risk. The uncertainty is addressed by the belief and plausibility fuzzy measures.
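
    A minimal evidence-theory calculation in this spirit is sketched below: two hypothetical sources assign basic probability masses to intervals of an imprecise dose-model parameter, Dempster's rule combines them, and belief and plausibility bound the probability that the parameter exceeds a level of interest. The intervals, masses, and threshold are invented for illustration.

```python
from itertools import product

# Hypothetical interval-valued evidence about an imprecise dose-model parameter.
m1 = {(0.05, 0.25): 0.3, (0.10, 0.40): 0.4, (0.20, 0.80): 0.3}
m2 = {(0.30, 0.70): 0.5, (0.10, 0.90): 0.5}

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Dempster's rule of combination over interval focal elements.
combined, conflict = {}, 0.0
for (ia, wa), (ib, wb) in product(m1.items(), m2.items()):
    inter = intersect(ia, ib)
    if inter is None:
        conflict += wa * wb
    else:
        combined[inter] = combined.get(inter, 0.0) + wa * wb
combined = {k: v / (1.0 - conflict) for k, v in combined.items()}  # normalize out conflict

# Event A: parameter >= 0.3. Belief sums masses of focal elements inside A;
# plausibility sums masses of focal elements that merely intersect A.
belief = sum(w for (lo, hi), w in combined.items() if lo >= 0.3)
plausibility = sum(w for (lo, hi), w in combined.items() if hi >= 0.3)
print(f"Bel(A) = {belief:.2f} <= P(A) <= Pl(A) = {plausibility:.2f}")
```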

  12. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed, non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
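
    For reference, the two-scale Lorenz '96 system used as the testbed can be integrated with only a few lines of code; the sketch below uses the conventional parameter values and a simple explicit time step, and is not the authors' implementation.

      import numpy as np

      # Two-scale Lorenz '96 testbed: K slow variables X, each coupled to J fast
      # variables Y. Parameter values are the conventional choices, assumed here.
      K, J = 8, 32
      F, h, c, b = 20.0, 1.0, 10.0, 10.0

      def rhs(X, Y):
          Ysum = Y.reshape(K, J).sum(axis=1)            # fast-variable forcing on each X_k
          dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
                - h * c / b * Ysum)
          Xrep = np.repeat(X, J)                        # parent X for each Y_{j,k}
          dY = (c * b * np.roll(Y, -1) * (np.roll(Y, 1) - np.roll(Y, -2))
                - c * Y + h * c / b * Xrep)
          return dX, dY

      def step(X, Y, dt=0.001):
          # Forward Euler for brevity; a Runge-Kutta scheme is preferable in practice.
          dX, dY = rhs(X, Y)
          return X + dt * dX, Y + dt * dY

      X, Y = F + np.random.randn(K), 0.1 * np.random.randn(K * J)
      for _ in range(5000):
          X, Y = step(X, Y)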

  13. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    NASA Astrophysics Data System (ADS)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many and which training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second relies on quantifying the rate of change by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that, with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
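
    The distance-based selection of training images can be sketched as a clustering of pairwise snapshot dissimilarities followed by picking one representative per cluster; the snapshot array, distance metric and cluster count below are placeholders rather than the study's actual choices.

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, fcluster

      # 120 flattened binary overhead maps (wet/dry), stand-ins for experiment snapshots.
      snapshots = (np.random.rand(120, 64 * 64) > 0.5).astype(float)

      D = pdist(snapshots, metric="jaccard")        # pairwise pattern dissimilarity
      Z = linkage(D, method="average")              # hierarchical clustering
      labels = fcluster(Z, t=5, criterion="maxclust")

      # One representative snapshot per cluster (here simply the first member)
      # becomes a candidate training image for the multiple-point simulation.
      training_idx = [np.where(labels == k)[0][0] for k in np.unique(labels)]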

  14. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by growth in model complexity and in the number of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo approach coupled with Bayesian estimation, has been widely used for uncertainty analysis of hydrological models. However, the random sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and search performance. In light of these features, this study adopts the genetic algorithm, differential evolution and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the standard GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
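
    A minimal GLUE skeleton, independent of any particular hydrological model, looks as follows; the informal likelihood (here a truncated Nash-Sutcliffe efficiency), the behavioral threshold and the simulator are placeholder assumptions.

      import numpy as np

      def glue(simulate, obs, prior_samples, threshold=0.3):
          """Score sampled parameter sets with an informal likelihood and keep
          the 'behavioral' ones together with their normalized weights."""
          scores = []
          for theta in prior_samples:
              sim = simulate(theta)                    # hypothetical rainfall-runoff model
              nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
              scores.append(max(nse, 0.0))             # negative efficiencies get zero weight
          scores = np.asarray(scores)
          keep = scores > threshold
          weights = scores[keep] / scores[keep].sum()
          return prior_samples[keep], weights          # weights feed predictive quantiles

      # With heuristic-optimization sampling, 'prior_samples' would come from the
      # high-likelihood regions found by GA/DE/SCE rather than from pure random draws.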

  15. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
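
    A schematic of the quantile-based rescaling and the resulting risk ratio is sketched below; the synthetic ensembles, threshold and empirical quantile mapping are illustrative assumptions, not the paper's exact procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      obs       = rng.normal(30.0, 2.0, 60)     # observed summer temperature index
      all_runs  = rng.normal(31.0, 2.5, 500)    # model, all forcings (placeholder)
      nat_runs  = rng.normal(29.5, 2.5, 500)    # model, natural forcings only
      threshold = 34.0                          # event threshold (placeholder)

      def quantile_adjust(vals, ref_model, ref_obs):
          # Map each value's quantile in the reference model distribution onto
          # the observed distribution (empirical quantile mapping).
          q = np.searchsorted(np.sort(ref_model), vals) / float(len(ref_model))
          return np.quantile(ref_obs, np.clip(q, 0.0, 1.0))

      all_adj = quantile_adjust(all_runs, all_runs, obs)
      nat_adj = quantile_adjust(nat_runs, all_runs, obs)
      p1, p0 = np.mean(all_adj > threshold), np.mean(nat_adj > threshold)
      rr = np.inf if p0 == 0 else p1 / p0       # if infinite, report a one-sided lower bound
      print(rr)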

  16. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  17. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  18. Evaluation of the ²³⁹Pu prompt fission neutron spectrum induced by neutrons of 500 keV and associated covariances

    DOE PAGES

    Neudecker, D.; Talou, P.; Kawano, T.; ...

    2015-08-01

    We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. 2010, surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The k_eff one-standard-deviation intervals overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1.

  19. Evaluation of the ²³⁹Pu prompt fission neutron spectrum induced by neutrons of 500 keV and associated covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neudecker, D.; Talou, P.; Kawano, T.

    2015-08-01

    We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The k_eff one-standard-deviation intervals overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1. © 2015 Elsevier B.V. All rights reserved.

  20. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE PAGES

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...

    2017-01-24

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  1. Development of an Uncertainty Quantification Predictive Chemical Reaction Model for Syngas Combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.

    An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.

  2. Process compensated resonance testing modeling for damage evolution and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Biedermann, Eric; Heffernan, Julieanne; Mayes, Alexander; Gatewood, Garrett; Jauriqui, Leanne; Goodlet, Brent; Pollock, Tresa; Torbet, Chris; Aldrin, John C.; Mazdiyasni, Siamack

    2017-02-01

    Process Compensated Resonance Testing (PCRT) is a nondestructive evaluation (NDE) method based on the fundamentals of Resonant Ultrasound Spectroscopy (RUS). PCRT is used for material characterization, defect detection, process control and life monitoring of critical gas turbine engine and aircraft components. Forward modeling and model inversion for PCRT have the potential to greatly increase the method's material characterization capability while reducing its dependence on compiling a large population of physical resonance measurements. This paper presents progress on forward modeling studies for damage mechanisms and defects common to structural materials for gas turbine engines. Finite element method (FEM) models of single crystal (SX) Ni-based superalloy Mar-M247 dog bones and Ti-6Al-4V cylindrical bars were created, and FEM modal analyses calculated the resonance frequencies for the samples in their baseline condition. Then the frequency effects of superalloy creep (high-temperature plastic deformation) and macroscopic texture (preferred crystallographic orientation of grains detrimental to fatigue properties) were evaluated. A PCRT sorting module for creep damage in Mar-M247 was trained with a virtual database made entirely of modeled design points. The sorting module demonstrated successful discrimination of design points with as little as 1% creep strain in the gauge section from a population of acceptable design points with a range of material and geometric variation. The resonance frequency effects of macro-scale texture in Ti-6Al-4V were quantified with forward models of cylinder samples. FEM-based model inversion was demonstrated for Mar-M247 bulk material properties and variations in crystallographic orientation. PCRT uncertainty quantification (UQ) was performed using Monte Carlo studies for Mar-M247 that quantified the overall uncertainty in resonance frequencies resulting from coupled variation in geometry, material properties, crystallographic orientation and creep damage. A model calibration process was also developed that evaluates inversion fitting to differences from a designated reference sample rather than absolute property values, yielding a reduction in fit error.

  3. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
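
    One widely used open-source route to such a variance-based analysis is Saltelli sampling plus Sobol' index estimation, for example with the SALib package; the sketch below uses a three-parameter placeholder problem and an analytic stand-in for the emulator rather than the 39-parameter CICE configuration.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Placeholder problem: three parameters standing in for the 39 CICE parameters.
      problem = {
          "num_vars": 3,
          "names": ["snow_conductivity", "snow_grain_size", "meltpond_drainage"],
          "bounds": [[0.1, 0.5], [5e-5, 2e-4], [0.0, 1.0]],
      }

      X = saltelli.sample(problem, 1024)                 # Sobol'-sequence design
      Y = np.array([x[0] + 2.0 * x[1] / 2e-4 + x[0] * x[2] for x in X])  # emulator stand-in
      Si = sobol.analyze(problem, Y)
      print(Si["S1"], Si["ST"])                          # first-order and total indices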

  4. Bayesian uncertainty quantification in linear models for diffusion MRI.

    PubMed

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
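
    The closed-form posterior referred to above is the standard conjugate result for linear-in-the-coefficients models; a generic sketch (isotropic Gaussian prior and noise, placeholder design matrix) is:

      import numpy as np

      def posterior_coefficients(X, y, sigma2, tau2):
          """Posterior mean and covariance of w for y = X w + e, e ~ N(0, sigma2 I),
          with prior w ~ N(0, tau2 I)."""
          p = X.shape[1]
          precision = X.T @ X / sigma2 + np.eye(p) / tau2
          cov = np.linalg.inv(precision)
          mean = cov @ (X.T @ y) / sigma2
          return mean, cov

      # Any quantity q = a.T @ w that is affine in the coefficients (e.g. a DTI-derived
      # scalar) then has posterior mean a.T @ mean and variance a.T @ cov @ a, which
      # is the per-voxel uncertainty that can downweight subjects in a group analysis.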

  5. Investigation of Biotransport in a Tumor With Uncertain Material Properties Using a Nonintrusive Spectral Uncertainty Quantification Method.

    PubMed

    Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin

    2017-09-01

    In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Loève (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
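
    Sampling a log-Gaussian permeability field from a truncated KL expansion can be sketched as follows; the 1-D grid, exponential covariance, correlation length and variance are placeholder choices for illustration.

      import numpy as np

      x = np.linspace(0.0, 1.0, 200)                 # 1-D grid through the tumor
      corr_len, variance = 0.1, 0.5
      C = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

      eigvals, eigvecs = np.linalg.eigh(C)           # KL modes of the discretized covariance
      order = np.argsort(eigvals)[::-1]
      eigvals, eigvecs = eigvals[order], eigvecs[:, order]

      n_kl = 20                                      # truncation order
      xi = np.random.standard_normal(n_kl)           # one draw of the KL coefficients
      log_k = np.log(1e-14) + eigvecs[:, :n_kl] @ (np.sqrt(eigvals[:n_kl]) * xi)
      permeability = np.exp(log_k)                   # one realization of the random field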

  6. Data and Model Uncertainties associated with Biogeochemical Groundwater Remediation and their impact on Decision Analysis

    NASA Astrophysics Data System (ADS)

    Pandey, S.; Vesselinov, V. V.; O'Malley, D.; Karra, S.; Hansen, S. K.

    2016-12-01

    Models and data are used to characterize the extent of contamination and remediation, both of which are dependent upon the complex interplay of processes ranging from geochemical reactions, microbial metabolism, and pore-scale mixing to heterogeneous flow and external forcings. Characterization is fraught with important uncertainties related to the model itself (e.g. conceptualization, model implementation, parameter values) and the data used for model calibration (e.g. sparsity, measurement errors). This research consists of two primary components: (1) Developing numerical models that incorporate the complex hydrogeology and biogeochemistry that drive groundwater contamination and remediation; (2) Utilizing novel techniques for data/model-based analyses (such as parameter calibration and uncertainty quantification) to aid in decision support for optimal uncertainty reduction related to characterization and remediation of contaminated sites. The reactive transport models are developed using PFLOTRAN and are capable of simulating a wide range of biogeochemical and hydrologic conditions that affect the migration and remediation of groundwater contaminants under diverse field conditions. Data/model-based analyses are achieved using MADS, which utilizes Bayesian methods and Information Gap theory to address the data/model uncertainties discussed above. We also use these tools to evaluate different models, which vary in complexity, in order to weigh and rank models based on model accuracy (in representation of existing observations), model parsimony (everything else being equal, models with smaller number of model parameters are preferred), and model robustness (related to model predictions of unknown future states). These analyses are carried out on synthetic problems, but are directly related to real-world problems; for example, the modeled processes and data inputs are consistent with the conditions at the Los Alamos National Laboratory contamination sites (RDX and Chromium).

  7. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  8. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  9. Physically-based modelling of high magnitude torrent events with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wing-Yuen Chow, Candace; Ramirez, Jorge; Zimmermann, Markus; Keiler, Margreth

    2017-04-01

    High magnitude torrent events are associated with the rapid propagation of vast quantities of water and available sediment downslope where human settlements may be established. Assessing the vulnerability of built structures to these events is a part of consequence analysis, where hazard intensity is related to the degree of loss sustained. The specific contribution of the presented work is a procedure to simulate these damaging events by applying physically-based modelling and to include uncertainty information about the simulated results. This is a first step in the development of vulnerability curves based on several intensity parameters (i.e. maximum velocity, sediment deposition depth and impact pressure). The investigation process begins with the collection, organization and interpretation of detailed post-event documentation and photograph-based observation data of affected structures in three sites that exemplify the impact of highly destructive mudflows and flood occurrences on settlements in Switzerland. Hazard intensity proxies are then simulated with the physically-based FLO-2D model (O'Brien et al., 1993). Prior to modelling, global sensitivity analysis is conducted to support a better understanding of model behaviour, parameterization and the quantification of uncertainties (Song et al., 2015). The inclusion of information describing the degree of confidence in the simulated results supports the credibility of vulnerability curves developed with the modelled data. First, key parameters are identified and selected based on literature review. Truncated a priori ranges of parameter values are then defined by expert elicitation. Local sensitivity analysis is performed based on manual calibration to provide an understanding of the parameters relevant to the case studies of interest. Finally, automated parameter estimation is performed to comprehensively search for optimal parameter combinations and associated values, which are evaluated using the observed data collected in the first stage of the investigation. O'Brien, J.S., Julien, P.Y., Fullerton, W. T., 1993. Two-dimensional water flood and mudflow simulation. Journal of Hydraulic Engineering 119(2): 244-261.
 Song, X., Zhang, J., Zhan, C., Xuan, Y., Ye, M., Xu C., 2015. Global sensitivity analysis in hydrological modeling: Review of concepts, methods, theoretical frameworks, Journal of Hydrology 523: 739-757.

  10. Critical aspects of data analysis for quantification in laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Motto-Ros, V.; Syvilay, D.; Bassel, L.; Negre, E.; Trichard, F.; Pelascini, F.; El Haddad, J.; Harhira, A.; Moncayo, S.; Picard, J.; Devismes, D.; Bousquet, B.

    2018-02-01

    In this study, a collaborative contest focused on LIBS data processing was conducted in an original way: rather than analyzing the same samples on their own LIBS experiments, the participants shared a set of LIBS spectra obtained from one single experiment. Each participant was asked to provide the predicted concentrations of several elements for two glass samples. The analytical contest revealed a wide diversity of results among participants, even when the same spectral lines were considered for the analysis. A parametric study was then conducted to investigate the influence of each step of the data processing. This study was based on several analytical figures of merit such as the determination coefficient, uncertainty, limit of quantification and prediction ability (i.e., trueness). It was then possible to interpret the results provided by the participants, emphasizing that the type of data extraction, the baseline modeling and the calibration model play key roles in the quantification performance of the technique. This work provides a set of recommendations based on a systematic evaluation of the quantification procedure, with the aim of optimizing the methodological steps toward the standardization of LIBS.
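
    Several of the cited figures of merit follow directly from a univariate calibration curve; the sketch below fits a line to placeholder concentration/intensity pairs and derives the determination coefficient and a 10-sigma limit of quantification (all values are made up for illustration).

      import numpy as np

      conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])               # reference concentrations (wt%)
      signal = np.array([118.0, 255.0, 505.0, 1240.0, 2470.0])  # baseline-corrected line areas

      slope, intercept = np.polyfit(conc, signal, 1)            # linear calibration curve
      pred = slope * conc + intercept
      r2 = 1.0 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)

      sigma_blank = 5.0                                         # blank-signal std. dev. (assumed)
      loq = 10.0 * sigma_blank / slope                          # common 10-sigma definition
      print(f"R2 = {r2:.4f}, LOQ = {loq:.2f} wt%")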

  11. Active Subspaces for Wind Plant Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N; Quick, Julian; Dykes, Katherine L

    Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
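
    The active subspace itself is commonly estimated from an eigendecomposition of the sampled gradient covariance of the quantity of interest; a generic sketch (placeholder gradients, not the wind plant model) is:

      import numpy as np

      def active_subspace(grads, n_keep=1):
          """Estimate active directions from gradient samples of the output with
          respect to the (normalized) inputs; rows of 'grads' are gradient samples."""
          C = grads.T @ grads / grads.shape[0]     # uncentered gradient covariance
          eigvals, eigvecs = np.linalg.eigh(C)
          order = np.argsort(eigvals)[::-1]
          return eigvals[order], eigvecs[:, order[:n_keep]]

      # Placeholder: gradients of plant power w.r.t. 25 axial induction factors,
      # constructed so that one direction dominates.
      grads = np.random.randn(200, 25) * np.array([5.0] + [0.1] * 24)
      eigvals, W1 = active_subspace(grads, n_keep=1)
      # Project inputs onto y = W1.T x and fit, e.g., a quadratic surrogate in y.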

  12. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Hooper, Russell W.

    2016-10-04

    In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers. More specifically, the CASL VUQ Strategy [33] prescribes the use of Predictive Capability Maturity Model (PCMM) assessments [37]. PCMM is an expert elicitation tool designed to characterize and communicate completeness of the approaches used for computational model definition, verification, validation, and uncertainty quantification associated with an intended application. Exercising a computational model with the methods in Dakota will yield, in part, evidence for a predictive capability maturity model (PCMM) assessment. Table 1.1 summarizes some key predictive maturity related activities (see details in [33]), with examples of how Dakota fits in. This manual offers CASL partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

  13. Low order models for uncertainty quantification in acoustic propagation problems

    NASA Astrophysics Data System (ADS)

    Millet, Christophe

    2016-11-01

    Long-range sound propagation problems are characterized by both a large number of length scales and a large number of normal modes. In the atmosphere, these modes are confined within waveguides, causing the sound to propagate through multiple paths to the receiver. For uncertain atmospheres, the modes are described as random variables. Concise mathematical models and analysis reveal fundamental limitations in classical projection techniques, due to different manifestations of the fact that modes carrying small variance can have important effects on the large-variance modes. In the present study, we propose a systematic strategy for obtaining statistically accurate low order models. The normal modes are sorted in order of decreasing Sobol' indices using asymptotic expansions, and the relevant modes are extracted using a modified iterative Krylov-based method. The statistics of acoustic signals are computed by decomposing the original pulse into a truncated sum of modal pulses that can be described by a stationary phase method. As the low-order acoustic model preserves the overall structure of waveforms under perturbations of the atmosphere, it can be applied to uncertainty quantification. The result of this study is a new algorithm which applies to the entire phase space of acoustic fields.

  14. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertainty parameters. In these situations, the classic Monte Carlo (MC) method often remains the preferred choice because its convergence rate O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via the polynomial chaos expansion. The first algorithm, for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several additional practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
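
    A standard inverse regression estimator of the SDR subspace is sliced inverse regression (SIR); the sketch below is a generic SIR implementation under the usual whitening convention, not the authors' IRUQ code.

      import numpy as np

      def sliced_inverse_regression(X, y, n_slices=10, n_dirs=1):
          """Estimate SDR directions: whiten X, average within slices of sorted y,
          and take leading eigenvectors of the slice-mean covariance."""
          n, p = X.shape
          mu = X.mean(axis=0)
          cov = np.cov(X, rowvar=False)
          L = np.linalg.cholesky(np.linalg.inv(cov))     # whitening factor
          Z = (X - mu) @ L
          order = np.argsort(y)
          M = np.zeros((p, p))
          for idx in np.array_split(order, n_slices):
              zbar = Z[idx].mean(axis=0)
              M += len(idx) / n * np.outer(zbar, zbar)
          w, V = np.linalg.eigh(M)
          top = V[:, np.argsort(w)[::-1][:n_dirs]]
          return L @ top                                 # directions in original coordinates

      # Example: the QoI depends only on x1 + x2 out of 20 uncertain inputs.
      X = np.random.randn(2000, 20)
      y = np.sin(X[:, 0] + X[:, 1]) + 0.05 * np.random.randn(2000)
      print(sliced_inverse_regression(X, y).ravel()[:4])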

  15. Calibration and Propagation of Uncertainty for Independence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy Michael; Kress, Joel David; Bhat, Kabekode Ghanasham

    This document reports on progress and methods for the calibration and uncertainty quantification of the Independence model developed at UT Austin. The Independence model is an advanced thermodynamic and process model framework for piperazine solutions as a high-performance CO2 capture solvent. Progress is presented in the framework of the CCSI standard basic data model inference framework. Recent work has largely focused on the thermodynamic submodels of Independence.

  16. Modeling and Uncertainty Quantification of Vapor Sorption and Diffusion in Heterogeneous Polymers

    DOE PAGES

    Sun, Yunwei; Harley, Stephen J.; Glascoe, Elizabeth A.

    2015-08-13

    A high-fidelity model of kinetic and equilibrium sorption and diffusion is developed and exercised. The gas-diffusion model is coupled with a triple-sorption mechanism: Henry’s law absorption, Langmuir adsorption, and pooling or clustering of molecules at higher partial pressures. Sorption experiments are conducted and span a range of relative humidities (0-95%) and temperatures (30-60°C). Kinetic and equilibrium sorption properties and effective diffusivity are determined by minimizing the absolute difference between measured and modeled uptakes. Uncertainty quantification and sensitivity analysis methods are described and exercised herein to demonstrate the capability of this modeling approach. Water uptake in silica-filled and unfilled poly(dimethylsiloxane) networks is investigated; however, the model is versatile enough to be used with a wide range of materials and vapors.
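
    A triple-sorption isotherm of this kind can be expressed and fitted in a few lines; the specific functional form of the pooling/clustering term below, the data and the initial guesses are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import curve_fit

      def triple_sorption(p, k_h, c_l, b, a, n):
          # Henry's-law dissolution + Langmuir adsorption + a power-law term
          # standing in for pooling/clustering at high partial pressure (assumed form).
          return k_h * p + c_l * b * p / (1.0 + b * p) + a * p ** n

      p_rel = np.linspace(0.05, 0.95, 12)                       # relative humidity
      uptake = triple_sorption(p_rel, 0.5, 0.3, 8.0, 0.4, 3.0)  # synthetic "measurements"
      uptake += 0.01 * np.random.randn(p_rel.size)

      popt, pcov = curve_fit(triple_sorption, p_rel, uptake,
                             p0=[0.4, 0.2, 5.0, 0.3, 2.5], maxfev=20000)
      perr = np.sqrt(np.diag(pcov))   # 1-sigma parameter uncertainties from the fit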

  17. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  18. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme to multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive methods such as collocation or Monte Carlo sampling. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. Implementing these approaches together in one framework allows basic problems to be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.

  19. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

    Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
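
    The gate logic underlying such a fault tree is simple to express; the sketch below combines independent basic events through OR/AND gates with purely illustrative probabilities, not values from the study.

      # Fault-tree sketch: annual probability of contaminant discharge from a wreck.
      def or_gate(probs):
          # P(at least one event occurs), assuming independent basic events.
          p_none = 1.0
          for q in probs:
              p_none *= (1.0 - q)
          return 1.0 - p_none

      def and_gate(probs):
          # P(all events occur), assuming independence.
          p_all = 1.0
          for q in probs:
              p_all *= q
          return p_all

      hull_breach = or_gate([0.02, 0.01, 0.005])   # e.g. corrosion, anchor strike, trawling
      release = and_gate([hull_breach, 0.5])       # breach AND tanks still holding contaminant
      p_discharge = or_gate([release, 0.001])      # plus an independent leakage path
      print(p_discharge)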

  20. Multi-fidelity numerical simulations of shock/turbulent-boundary layer interaction with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bermejo-Moreno, Ivan; Campo, Laura; Larsson, Johan; Emory, Mike; Bodart, Julien; Palacios, Francisco; Iaccarino, Gianluca; Eaton, John

    2013-11-01

    We study the interaction between an oblique shock wave and the turbulent boundary layers inside a nearly-square duct by combining wall-modeled LES, 2D and 3D RANS simulations, targeting the experiment of Campo, Helmer & Eaton, 2012 (nominal conditions: M = 2.05, Reθ = 6,500). A primary objective is to quantify the effect of aleatory and epistemic uncertainties on the STBLI. Aleatory uncertainties considered include the inflow conditions (Mach number of the incoming air stream and thickness of the boundary layers) and perturbations of the duct geometry upstream of the interaction. The epistemic uncertainty under consideration focuses on the RANS turbulence model form by injecting perturbations in the Reynolds stress anisotropy in regions of the flow where the model assumptions (in particular, the Boussinesq eddy-viscosity hypothesis) may be invalid. These perturbations are then propagated through the flow solver into the solution. The uncertainty quantification (UQ) analysis is done through 2D and 3D RANS simulations, assessing the importance of the three-dimensional effects imposed by the nearly-square duct geometry. Wall-modeled LES are used to verify elements of the UQ methodology and to explore the flow features and physics of the STBLI for multiple shock strengths. Financial support from the United States Department of Energy under the PSAAP program is gratefully acknowledged.

  1. Uncertainty Quantification in Alchemical Free Energy Methods.

    PubMed

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-06-12

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation, that is, an ensemble of independent MD simulations, which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
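
    The ensemble-based error estimate amounts to treating each independent replica as one draw of the free energy difference; the per-replica values below are hypothetical placeholders for what a TI or FEP estimator would return.

      import numpy as np

      replica_dG = np.array([-7.8, -8.3, -7.5, -8.9, -8.1])   # kcal/mol, hypothetical replicas

      dG_mean = replica_dG.mean()
      # Standard error of the mean across independent replicas; a bootstrap over
      # replicas gives a similar interval without assuming normality.
      dG_sem = replica_dG.std(ddof=1) / np.sqrt(replica_dG.size)
      print(f"dG = {dG_mean:.2f} +/- {dG_sem:.2f} kcal/mol")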

  2. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a 'valid' measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an 'outlier' measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U_68.5 uncertainties are estimated at the 68.5% confidence level while U_95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.

  3. ARM Best Estimate Data (ARMBE) Products for Climate Science for a Sustainable Energy Future (CSSEF)

    DOE Data Explorer

    Riihimaki, Laura; Gaustad, Krista; McFarlane, Sally

    2014-06-12

    This data set was created for the Climate Science for a Sustainable Energy Future (CSSEF) model testbed project and is an extension of the hourly average ARMBE dataset to other extended facility sites and to include uncertainty estimates. Uncertainty estimates were needed in order to use uncertainty quantification (UQ) techniques with the data.

  4. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks, with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  5. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  6. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.

  7. Uncertainty quantification of fast sodium current steady-state inactivation for multi-scale models of cardiac electrophysiology.

    PubMed

    Pathmanathan, Pras; Shotwell, Matthew S; Gavaghan, David J; Cordeiro, Jonathan M; Gray, Richard A

    2015-01-01

    Perhaps the most mature area of multi-scale systems biology is the modelling of the heart. Current models are grounded in over fifty years of research in the development of biophysically detailed models of the electrophysiology (EP) of cardiac cells, but one aspect which is inadequately addressed is the incorporation of uncertainty and physiological variability. Uncertainty quantification (UQ) is the identification and characterisation of the uncertainty in model parameters derived from experimental data, and the computation of the resultant uncertainty in model outputs. It is a necessary tool for establishing the credibility of computational models, and will likely be expected of EP models for future safety-critical clinical applications. The focus of this paper is formal UQ of one major sub-component of cardiac EP models, the steady-state inactivation of the fast sodium current, INa. To better capture average behaviour and quantify variability across cells, we have applied for the first time an 'individual-based' statistical methodology to assess voltage clamp data. Advantages of this approach over a more traditional 'population-averaged' approach are highlighted. The method was used to characterise variability amongst cells isolated from canine epi and endocardium, and this variability was then 'propagated forward' through a canine model to determine the resultant uncertainty in model predictions at different scales, such as upstroke velocity and spiral wave dynamics. Statistically significant differences between epi and endocardial cells (greater half-inactivation and less steep slope of the steady-state inactivation curve for endo) were observed, and the forward propagation revealed a lack of robustness of the model to underlying variability, but also surprising robustness to variability at the tissue scale. Overall, the methodology can be used to: (i) better analyse voltage clamp data; (ii) characterise underlying population variability; (iii) investigate consequences of variability; and (iv) improve the ability to validate a model. To our knowledge this article is the first to quantify population variability in membrane dynamics in this manner, and the first to perform formal UQ for a component of a cardiac model. The approach is likely to find much wider applicability across systems biology as current application domains reach greater levels of maturity. Published by Elsevier Ltd.
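
    A minimal sketch of the per-cell fitting idea behind an 'individual-based' analysis: each (synthetic) cell's voltage-clamp data are fitted to a standard Boltzmann steady-state inactivation curve, and the spread of the fitted parameters summarizes population variability. The data, parameter values, and noise level are invented for illustration and do not reproduce the paper's statistical methodology.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Boltzmann steady-state inactivation curve (standard functional form).
def boltzmann(v, v_half, k):
    return 1.0 / (1.0 + np.exp((v - v_half) / k))

voltages = np.arange(-120, -30, 10.0)          # clamp voltages (mV)

# Synthetic "per-cell" voltage-clamp data: each cell has its own parameters.
true_params = [(-85 + rng.normal(0, 3), 6 + rng.normal(0, 0.5)) for _ in range(8)]
fits = []
for v_half, k in true_params:
    data = boltzmann(voltages, v_half, k) + rng.normal(0, 0.02, voltages.size)
    popt, _ = curve_fit(boltzmann, voltages, data, p0=(-80.0, 5.0))
    fits.append(popt)

fits = np.array(fits)
print("per-cell half-inactivation (mV):", np.round(fits[:, 0], 1))
print("population mean/sd of V_half   :", fits[:, 0].mean(), fits[:, 0].std())
print("population mean/sd of slope k  :", fits[:, 1].mean(), fits[:, 1].std())
# The fitted population spread could then be propagated through a cell or
# tissue model to examine uncertainty in outputs such as upstroke velocity.
```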

  8. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage sensitive features are extracted from the first received S0 mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
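
    The sketch below illustrates the two ingredients named in the abstract, a baseline response-surface model and Bayesian updating with a few target measurements, using a single feature and a grid-based posterior over crack length. The coefficients, measurements, and noise level are hypothetical and only stand in for the finite-element-calibrated model of the paper.

```python
import numpy as np

# Baseline response-surface model (illustrative coefficients, standing in for
# a fit to finite-element simulation data): normalized amplitude vs crack length.
def baseline_amplitude(length_mm, coeff=(1.0, -0.015, -0.0004)):
    a, b, c = coeff
    return a + b * length_mm + c * length_mm ** 2

# A few measurements from the "target" structure (hypothetical values).
measured_amp = np.array([0.78, 0.76])
sigma_meas = 0.03                      # assumed measurement noise std

# Bayesian updating on a grid: posterior over crack length with a flat prior.
lengths = np.linspace(0.0, 20.0, 401)
log_post = np.zeros_like(lengths)
for y in measured_amp:
    log_post += -0.5 * ((y - baseline_amplitude(lengths)) / sigma_meas) ** 2
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, lengths)

mean_len = np.trapz(lengths * post, lengths)
print(f"posterior mean crack length: {mean_len:.2f} mm")
cdf = np.cumsum(post) / post.sum()
lo, hi = np.interp([0.025, 0.975], cdf, lengths)
print(f"95% credible interval: [{lo:.2f}, {hi:.2f}] mm")
```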

  9. Applied Mathematics in EM Studies with Special Emphasis on an Uncertainty Quantification and 3-D Integral Equation Modelling

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexey

    2016-01-01

    Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous number of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, if a 3-D set-up is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest for fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while remaining inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
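
    A minimal example of a gradient-informed Metropolis-Hastings variant (here a Metropolis-adjusted Langevin proposal) on a toy two-parameter penalty function. In the EM setting the gradient would come from adjoint computations and the target would be far more expensive; all numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy penalty (negative log-posterior) and its gradient; in the EM setting
# these would come from adjoint-based forward and misfit computations.
mu_true = np.array([1.0, -2.0])
var_true = np.array([0.5, 2.0])

def penalty(m):
    return 0.5 * np.sum((m - mu_true) ** 2 / var_true)

def grad_penalty(m):
    return (m - mu_true) / var_true

def mala_step(m, eps):
    # Gradient-informed (Langevin) proposal.
    prop = m - 0.5 * eps**2 * grad_penalty(m) + eps * rng.standard_normal(m.size)

    def logq(a, b):   # log q(a | b), up to a constant
        mu = b - 0.5 * eps**2 * grad_penalty(b)
        return -np.sum((a - mu) ** 2) / (2 * eps**2)

    # Asymmetric proposal densities enter the acceptance ratio.
    log_alpha = (penalty(m) - penalty(prop)) + logq(m, prop) - logq(prop, m)
    return prop if np.log(rng.random()) < log_alpha else m

m = np.zeros(2)
samples = []
for _ in range(5000):
    m = mala_step(m, eps=0.4)
    samples.append(m.copy())
samples = np.array(samples[1000:])
print("posterior mean:", samples.mean(axis=0))   # should approach [1, -2]
print("posterior std :", samples.std(axis=0))
```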

  10. Framing of Uncertainty in Scientific Publications: Towards Recommendations for Decision Support

    NASA Astrophysics Data System (ADS)

    Guillaume, J. H. A.; Helgeson, C.; Elsawah, S.; Jakeman, A. J.; Kummu, M.

    2016-12-01

    Uncertainty is recognised as an essential issue in environmental decision making and decision support. As modellers, we notably use a variety of tools and techniques within an analysis, for example related to uncertainty quantification and model validation. We also address uncertainty by how we present results. For example, experienced modellers are careful to distinguish robust conclusions from those that need further work, and the precision of quantitative results is tailored to their accuracy. In doing so, the modeller frames how uncertainty should be interpreted by their audience. This is an area which extends beyond modelling to fields such as philosophy of science, semantics, discourse analysis, intercultural communication and rhetoric. We propose that framing of uncertainty deserves greater attention in the context of decision support, and that there are opportunities in this area for fundamental research, synthesis and knowledge transfer, development of teaching curricula, and significant advances in managing uncertainty in decision making. This presentation reports preliminary results of a study of framing practices. Specifically, we analyse the framing of uncertainty that is visible in the abstracts from a corpus of scientific articles. We do this through textual analysis of the content and structure of those abstracts. Each finding that appears in an abstract is classified according to the uncertainty framing approach used, using a classification scheme that was iteratively revised based on reflection and comparison amongst three coders. This analysis indicates how frequently the different framing approaches are used, and provides initial insights into relationships between frames, how the frames relate to interpretation of uncertainty, and how rhetorical devices are used by modellers to communicate uncertainty in their work. We propose initial hypotheses for how the resulting insights might influence decision support, and help advance decision making to better address uncertainty.

  11. A Generalized Perturbation Theory Solver In Rattlesnake Based On PETSc With Application To TREAT Steady State Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunert, Sebastian; Wang, Congjian; Wang, Yaqi

    Rattlesnake and MAMMOTH are the designated TREAT analysis tools currently being developed at the Idaho National Laboratory. Concurrent with development of the multi-physics, multi-scale capabilities, sensitivity analysis and uncertainty quantification (SA/UQ) capabilities are required for predictive modeling of the TREAT reactor. For steady-state SA/UQ, which is essential for setting initial conditions for the transients, generalized perturbation theory (GPT) will be used. This work describes the implementation of a PETSc-based solver for the generalized adjoint equations that constitute an inhomogeneous, rank-deficient problem. The standard approach is to use an outer iteration strategy with repeated removal of the fundamental mode contamination. The described GPT algorithm directly solves the GPT equations without the need for an outer iteration procedure by using Krylov subspaces that are orthogonal to the operator's nullspace. Three test problems are solved and provide sufficient verification of Rattlesnake's GPT capability. We conclude with a preliminary example evaluating the impact of the boron distribution in the TREAT reactor using perturbation theory.

  12. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process-based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models arises not only from the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also because the GCS optimization estimation problem has multiple local minima due to the complex nonlinear multi-phase (gas and aqueous) and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response surface global optimization algorithm is first used to calibrate the model parameters; then the prediction uncertainty of the CO2 plume position resulting from the propagation of parametric uncertainty is quantified in the numerical experiments and compared to the actual plume from the 'true' model. Results prove that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification for computationally expensive simulation models. Both our inverse methodology and findings are broadly applicable to GCS in heterogeneous storage formations.

  13. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer results in a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.

  14. Uncertainty quantification in flux balance analysis of spatially lumped and distributed models of neuron-astrocyte metabolism.

    PubMed

    Calvetti, Daniela; Cheng, Yougan; Somersalo, Erkki

    2016-12-01

    Identifying feasible steady state solutions of a brain energy metabolism model is an inverse problem that allows infinitely many solutions. The characterization of the non-uniqueness, or the uncertainty quantification of the flux balance analysis, is tantamount to identifying the degrees of freedom of the solution. The degrees of freedom of multi-compartment mathematical models for energy metabolism of a neuron-astrocyte complex may offer a key to understand the different ways in which the energetic needs of the brain are met. In this paper we study the uncertainty in the solution, using techniques of linear algebra to identify the degrees of freedom in a lumped model, and Markov chain Monte Carlo methods in its extension to a spatially distributed case. The interpretation of the degrees of freedom in metabolic terms, more specifically, glucose and oxygen partitioning, is then leveraged to derive constraints on the free parameters to guarantee that the model is energetically feasible. We demonstrate how the model can be used to estimate the stoichiometric energy needs of the cells as well as the household energy based on the measured oxidative cerebral metabolic rate of glucose and glutamate cycling. Moreover, our analysis shows that in the lumped model the net direction of lactate dehydrogenase (LDH) in the cells can be deduced from the glucose partitioning between the compartments. The extension of the lumped model to a spatially distributed multi-compartment setting that includes diffusion fluxes from capillary to tissue increases the number of degrees of freedom, requiring the use of statistical sampling techniques. The analysis of the distributed model reveals that some of the conclusions valid for the spatially lumped model, e.g., concerning the LDH activity and glucose partitioning, may no longer hold.
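
    The linear-algebra step described above, identifying the degrees of freedom of a steady-state flux balance problem, amounts to computing the nullspace of the stoichiometric matrix. The sketch below does this for a toy matrix; the matrix and flux values are hypothetical and far smaller than the neuron-astrocyte model.

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix S (rows: metabolites, columns: fluxes) standing in
# for the lumped model; steady state requires S @ v = 0.
S = np.array([
    [1, -1,  0,  0],
    [0,  1, -1, -1],
])

N = null_space(S)          # basis of the steady-state flux nullspace
print("degrees of freedom:", N.shape[1])
print("nullspace basis:\n", np.round(N, 3))

# Any steady-state flux distribution is v = N @ alpha; constraints such as
# non-negativity or measured uptake rates restrict the admissible alpha,
# which is how metabolic interpretations (e.g. glucose partitioning) arise.
alpha = np.array([1.0, 0.5])
v = N @ alpha
print("one feasible steady-state flux vector:", np.round(v, 3))
```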

  15. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education

    ERIC Educational Resources Information Center

    Schwalbe, Michelle Kristin

    2010-01-01

    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  16. On combination of strict Bayesian principles with model reduction technique or how stochastic model calibration can become feasible for large-scale applications

    NASA Astrophysics Data System (ADS)

    Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.

    2013-12-01

    Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to a lack of information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches for calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore handle directly the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. We then combined the aPC with Bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman Filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of accurate filtering become manageable within our suggested aPC-based calibration framework. However, the power of aPC-based Bayesian updating strongly depends on the accuracy of prior information. In the current study, the prior assumptions on the model parameters were not satisfactory and strongly underestimated the reservoir pressure. Thus, the aPC-based response surface used in Bootstrap filtering is fitted to a distant and poorly chosen region within the parameter space. Thanks to the iterative procedure suggested in [2] we overcome this drawback at small computational cost. The iteration successively improves the accuracy of the expansion around the current estimation of the posterior distribution. The final result is a calibrated model of the site that can be used for further studies, with an excellent match to the data. References [1] Oladyshkin S. and Nowak W. Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion. Reliability Engineering and System Safety, 106:179-190, 2012. [2] Oladyshkin S., Class H., Nowak W. Bayesian updating via Bootstrap filtering combined with data-driven polynomial chaos expansions: methodology and application to history matching for carbon dioxide storage in geological formations. Computational Geosciences, 17 (4), 671-687, 2013.

  17. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  18. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  19. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors, and vehicle operational environments is presented. The propulsion and navigation system models are used to evaluate flight-testing methods for evaluating fixed-wing sUAS performance. A brief airframe analysis is presented to provide a foundation for assessing the efficacy of the flight-test methods. The flight-testing presented in this work is focused on validating the aircraft drag polar, zero-lift drag coefficient, and span efficiency factor. Three methods are detailed and evaluated for estimating these design parameters. Specific focus is placed on the influence of propulsion and navigation system uncertainty on the resulting performance data. Performance estimates are used in conjunction with the propulsion model to estimate the impact sensor and measurement uncertainty on the endurance and range of a fixed-wing sUAS. Endurance and range results for a simplistic power available model are compared to the Reynolds-dependent model presented in this work. Additional parameter sensitivity analysis related to state estimation uncertainties encountered in flight-testing are presented. Results from these analyses indicate that the sub-system models introduced in this work are of first-order importance, on the order of 5-10% change in range and endurance, in assessing the performance of a fixed-wing sUAS.

  20. Uncertainty Quantification in Climate Modeling and Projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Yun; Jackson, Charles; Giorgi, Filippo

    The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for assessing reliability and uncertainties of climate change information. An alternative approach is to generate similar ensembles by perturbing parameters within a single-model framework. One of the workshop's objectives was to give participants a deeper understanding of these approaches within a Bayesian statistical framework. However, significant challenges remain to be resolved before UQ can be applied in a convincing way to climate models and their projections.

  1. Evaluation of global fine-resolution precipitation products and their uncertainty quantification in ensemble discharge simulations

    NASA Astrophysics Data System (ADS)

    Qi, W.; Zhang, C.; Fu, G.; Sweetapple, C.; Zhou, H.

    2016-02-01

    The applicability of six fine-resolution precipitation products, including precipitation radar, infrared, microwave and gauge-based products, using different precipitation computation recipes, is evaluated using statistical and hydrological methods in northeastern China. In addition, a framework quantifying uncertainty contributions of precipitation products, hydrological models, and their interactions to uncertainties in ensemble discharges is proposed. The investigated precipitation products are Tropical Rainfall Measuring Mission (TRMM) products (TRMM3B42 and TRMM3B42RT), Global Land Data Assimilation System (GLDAS)/Noah, Asian Precipitation - Highly-Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN), and a Global Satellite Mapping of Precipitation (GSMAP-MVK+) product. Two hydrological models of different complexities, i.e. a water and energy budget-based distributed hydrological model and a physically based semi-distributed hydrological model, are employed to investigate the influence of hydrological models on simulated discharges. Results show APHRODITE has high accuracy at a monthly scale compared with other products, and GSMAP-MVK+ shows a clear advantage over TRMM3B42 in relative bias (RB), Nash-Sutcliffe coefficient of efficiency (NSE), root mean square error (RMSE), correlation coefficient (CC), false alarm ratio, and critical success index. These findings could be very useful for validation, refinement, and future development of satellite-based products (e.g. NASA Global Precipitation Measurement). Although large uncertainty exists in heavy precipitation, hydrological models contribute most of the uncertainty in extreme discharges. Interactions between precipitation products and hydrological models can contribute to discharge uncertainty with a magnitude similar to that of the hydrological models themselves. A better precipitation product does not guarantee a better discharge simulation because of these interactions. It is also found that a good discharge simulation depends on a good coalition of a hydrological model and a precipitation product, suggesting that, although the satellite-based precipitation products are not as accurate as the gauge-based products, they could have better performance in discharge simulations when appropriately combined with hydrological models. This information is revealed for the first time and is very beneficial for precipitation product applications.
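
    For reference, the statistical scores named above (relative bias, Nash-Sutcliffe efficiency, RMSE, correlation coefficient) can be computed as in the sketch below; the observation and satellite series are invented numbers, not data from the study.

```python
import numpy as np

def relative_bias(sim, obs):
    return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

def corr(sim, obs):
    return np.corrcoef(sim, obs)[0, 1]

# Illustrative monthly gauge observations and one satellite product (mm).
obs = np.array([12.0, 30.0, 55.0, 80.0, 120.0, 60.0, 25.0, 10.0])
sat = np.array([10.0, 35.0, 50.0, 90.0, 110.0, 70.0, 20.0, 12.0])

print(f"RB   = {relative_bias(sat, obs):+.2%}")
print(f"NSE  = {nse(sat, obs):.3f}")
print(f"RMSE = {rmse(sat, obs):.1f} mm")
print(f"CC   = {corr(sat, obs):.3f}")
```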

  2. Uncertainty quantification-based robust aerodynamic optimization of laminar flow nacelle

    NASA Astrophysics Data System (ADS)

    Xiong, Neng; Tao, Yang; Liu, Zhiyong; Lin, Jun

    2018-05-01

    The aerodynamic performance of a laminar flow nacelle is highly sensitive to uncertain working conditions, especially the surface roughness. An efficient robust aerodynamic optimization method on the basis of non-deterministic computational fluid dynamics (CFD) simulation and the Efficient Global Optimization (EGO) algorithm was employed. A non-intrusive polynomial chaos method is used in conjunction with an existing well-verified CFD module to quantify the uncertainty propagation in the flow field. This paper investigates the roughness modeling behavior of the γ-Ret shear stress transport model, which accounts for flow transition and surface roughness effects. The roughness effects are modeled to simulate sand grain roughness. A Class-Shape Transformation-based parametrical description of the nacelle contour as part of an automatic design evaluation process is presented. A Design of Experiments (DoE) was performed and a Kriging surrogate model was built. The new nacelle design process demonstrates that significant improvements in both the mean and the variance of the efficiency are achieved and that the proposed method can be applied to laminar flow nacelle design successfully.

  3. Multivariate Geostatistical Analysis of Uncertainty for the Hydrodynamic Model of a Geological Trap for Carbon Dioxide Storage. Case study: Multilayered Geological Structure Vest Valcele, ROMANIA

    NASA Astrophysics Data System (ADS)

    Scradeanu, D.; Pagnejer, M.

    2012-04-01

    The purpose of this work is to evaluate the uncertainty of the hydrodynamic model for a multilayered geological structure, a potential trap for carbon dioxide storage. The hydrodynamic model is based on a conceptual model of the multilayered hydrostructure with three components: 1) a spatial model; 2) a parametric model; and 3) an energy model. The necessary data to achieve the three components of the conceptual model are obtained from: 240 boreholes explored by geophysical logging and seismic investigation, for the first two components, and an experimental water injection test for the last one. The hydrodynamic model is a finite difference numerical model based on a 3D stratigraphic model with nine stratigraphic units (Badenian and Oligocene) and a 3D multiparameter model (porosity, permeability, hydraulic conductivity, storage coefficient, leakage, etc.). The uncertainty of the two 3D models was evaluated using multivariate geostatistical tools: a) the cross-semivariogram for structural analysis, especially the study of anisotropy, and b) cokriging to reduce estimation variances in the specific situation where there is a cross-correlation between a variable and one or more variables that are undersampled. Important differences were identified between univariate and bivariate anisotropy. The minimised uncertainty of the parametric model (by cokriging) was transferred to the hydrodynamic model. The uncertainty distribution of the pressures generated by the water injection test was additionally filtered by the sensitivity of the numerical model. The obtained relative errors of the pressure distribution in the hydrodynamic model are 15-20%. The scientific research was performed in the frame of the European FP7 project "A multiple space and time scale approach for the quantification of deep saline formation for CO2 storage (MUSTANG)".

  4. Uncertainties in the Antarctic Ice Sheet Contribution to Sea Level Rise: Exploration of Model Response to Errors in Climate Forcing, Boundary Conditions, and Internal Parameters

    NASA Astrophysics Data System (ADS)

    Schlegel, N.; Seroussi, H. L.; Boening, C.; Larour, E. Y.; Limonadi, D.; Schodlok, M.; Watkins, M. M.

    2017-12-01

    The Jet Propulsion Laboratory-University of California at Irvine Ice Sheet System Model (ISSM) is a thermo-mechanical 2D/3D parallelized finite element software used to physically model the continental-scale flow of ice at high resolutions. Embedded into ISSM are uncertainty quantification (UQ) tools, based on the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) software. ISSM-DAKOTA offers various UQ methods for the investigation of how errors in model input impact uncertainty in simulation results. We utilize these tools to regionally sample model input and key parameters, based on specified bounds of uncertainty, and run a suite of continental-scale 100-year ISSM forward simulations of the Antarctic Ice Sheet. Resulting diagnostics (e.g., spread in local mass flux and regional mass balance) inform our conclusion about which parameters and/or forcings have the greatest impact on century-scale model simulations of ice sheet evolution. The results allow us to prioritize the key datasets and measurements that are critical for the minimization of ice sheet model uncertainty. Overall, we find that Antarctica's total sea level contribution is strongly affected by grounding line retreat, which is driven by the magnitude of ice shelf basal melt rates and by errors in bedrock topography. In addition, results suggest that after 100 years of simulation, Thwaites glacier is the most significant source of model uncertainty, and its drainage basin has the largest potential for future sea level contribution. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  5. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with respect to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
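
    A minimal sketch of a one-dimensional Hermite chaos expansion fitted by collocation, in the spirit of the approach described above: coefficients are obtained by least squares at sampled collocation points, and the mean and variance follow from the orthogonality of the probabilists' Hermite polynomials. The model function and settings are illustrative only and do not reproduce the study's hydrological model.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(3)

# Model response as a function of a standard-normal germ xi (a stand-in for a
# hydrological model output driven by one uncertain parameter).
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 4
xi_pts = rng.standard_normal(200)                 # collocation points
# Design matrix of probabilists' Hermite polynomials He_0..He_order.
A = np.column_stack([hermeval(xi_pts, np.eye(order + 1)[k])
                     for k in range(order + 1)])
coef, *_ = np.linalg.lstsq(A, model(xi_pts), rcond=None)

# Mean and variance follow from orthogonality: E[He_j He_k] = k! * delta_jk.
mean = coef[0]
var = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(f"PCE mean     : {mean:.4f}")
print(f"PCE variance : {var:.4f}")

# Cheap probabilistic forecasting: evaluate the expansion instead of the model.
xi_new = rng.standard_normal(5)
print("surrogate  :", np.round(hermeval(xi_new, coef), 4))
print("full model :", np.round(model(xi_new), 4))
```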

  6. Preliminary Climate Uncertainty Quantification Study on Model-Observation Test Beds at Earth Systems Grid Federation Repository

    NASA Astrophysics Data System (ADS)

    Lin, G.; Stephan, E.; Elsethagen, T.; Meng, D.; Riihimaki, L. D.; McFarlane, S. A.

    2012-12-01

    Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in applications. It determines how likely certain outcomes are if some aspects of the system are not exactly known. UQ studies, such as those on atmospheric datasets, have greatly increased in size and complexity because they now comprise additional complex iterative steps, involve numerous simulation runs, and can include additional analytical products such as charts, reports, and visualizations to explain levels of uncertainty. These new requirements greatly expand the need for metadata support beyond the NetCDF convention and vocabulary, and as a result an additional formal data provenance ontology is required to provide a historical explanation of the origin of the dataset that includes references between the explanations and components within the dataset. This work shares a climate observation data UQ science use case and illustrates how to reduce climate observation data uncertainty and use a linked science application called Provenance Environment (ProvEn) to enable scientific teams to publish, share, link, and discover knowledge about the UQ research results. UQ results include terascale datasets that are published to an Earth Systems Grid Federation (ESGF) repository. Uncertainty exists in observation data sets due to sensor data processing (such as time averaging), sensor failure in extreme weather conditions, sensor manufacturing error, etc. To reduce the uncertainty in the observation data sets, a method based on Principal Component Analysis (PCA) was proposed to recover the missing values in observation data. Several large principal components (PCs) of data with missing values are computed based on available values using an iterative method. The computed PCs can approximate the true PCs with high accuracy provided a condition on the missing values is met; the iterative method greatly improves the computational efficiency in computing PCs. Moreover, noise removal is done at the same time during the process of computing missing values by using only several large PCs. The uncertainty quantification is done through statistical analysis of the distribution of different PCs. To record the above UQ process and explain the uncertainty before and after the UQ process on the observation data sets, an additional data provenance ontology, such as ProvEn, is necessary. In this study, we demonstrate how to reduce observation data uncertainty on climate model-observation test beds and how to use ProvEn to record the UQ process on ESGF. ProvEn demonstrates how a scientific team conducting UQ studies can discover dataset links using its domain knowledgebase, allowing them to better understand and convey the UQ study research objectives, the experimental protocol used, the resulting dataset lineage, related analytical findings, and ancillary literature citations, along with the social network of scientists associated with the study. Climate scientists will not only benefit from understanding a particular dataset within a knowledge context, but also benefit from the cross-reference of knowledge among the numerous UQ studies being stored in ESGF.
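
    The PCA-based recovery of missing observations described above can be illustrated with a simple iterative scheme: fill missing cells with column means, project onto a few leading principal components, overwrite only the missing cells, and repeat. The synthetic data, rank, and missing fraction below are assumptions for illustration, not the study's algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic observation matrix (time x sensors) with an underlying low-rank
# structure, a stand-in for a climate observation dataset.
t = np.linspace(0, 4 * np.pi, 200)
X = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t)]) @ rng.normal(size=(3, 8))
X += 0.05 * rng.standard_normal(X.shape)

mask = rng.random(X.shape) < 0.1            # ~10% missing values
X_obs = X.copy()
X_obs[mask] = np.nan

# Iterative PCA imputation: fill with column means, then repeatedly project
# onto the leading principal components and overwrite only the missing cells.
def pca_impute(Y, n_pc=3, n_iter=50):
    filled = np.where(np.isnan(Y), np.nanmean(Y, axis=0), Y)
    for _ in range(n_iter):
        mu = filled.mean(axis=0)
        U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
        approx = (U[:, :n_pc] * s[:n_pc]) @ Vt[:n_pc] + mu
        filled[np.isnan(Y)] = approx[np.isnan(Y)]
    return filled

X_hat = pca_impute(X_obs)
err = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
print(f"RMSE of recovered missing values: {err:.4f}")
```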

  7. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

  8. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.

  9. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
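
    A simplified sketch of the Bayesian ingredient described above: a normal prior on the coefficients of an overcomplete basis yields, with Gaussian noise, a Gaussian posterior from which samples and credible intervals can be drawn directly. The basis, data, prior and noise variances are illustrative; the OBSM method itself and its MCMC machinery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Overcomplete basis: more candidate basis functions than strictly needed
# (Gaussian bumps at many centres), a simplified stand-in for the OBSM setting.
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

centres = np.linspace(0, 1, 25)
Phi = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / 0.08) ** 2)

# Normal prior on coefficients plus Gaussian noise -> Gaussian posterior.
tau2, sigma2 = 1.0, 0.1 ** 2
A = Phi.T @ Phi / sigma2 + np.eye(centres.size) / tau2
cov = np.linalg.inv(A)
mean = cov @ Phi.T @ y / sigma2

# Posterior samples of the coefficients, hence of the predictions.
L = np.linalg.cholesky(cov)
samples = mean[None, :] + rng.standard_normal((500, centres.size)) @ L.T
preds = samples @ Phi.T                      # predictive curves at x

lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)
print("mean prediction near x=0.5:", float((Phi @ mean)[20]))
print("95% credible interval     :", float(lo[20]), float(hi[20]))
```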

  10. Assessment of the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Iannetti, Anthony; Shih, Tsan-Hsing

    2007-01-01

    The advancements made during the last decade in the areas of combustion modeling, numerical simulation, and computing platforms have greatly facilitated the use of CFD-based tools in the development of combustion technology. Further development of verification, validation and uncertainty quantification will have a profound impact on the reliability and utility of these CFD-based tools. The objectives of the present effort are to establish a baseline for the National Combustion Code (NCC) and experimental data, as well as to document current capabilities and identify gaps for further improvements.

  11. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.

  12. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.

  13. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
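
    The grouped, variance-based sensitivity idea described above can be illustrated with a pick-freeze estimator of first-order Sobol' indices for groups of inputs. The toy model and the two groups, standing in for boundary conditions and the permeability field, are hypothetical and only sketch the variance-decomposition step.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy model: the output depends on a "boundary condition" group (x1, x2) and a
# "permeability" group (x3, x4); a stand-in for the groundwater model response.
def model(x):
    x1, x2, x3, x4 = x.T
    return 2.0 * x1 + 0.5 * x2 + np.sin(np.pi * x3) * (1.0 + 0.3 * x4)

n = 50_000
A = rng.random((n, 4))
B = rng.random((n, 4))
yA = model(A)
var_y = yA.var()

groups = {"boundary (x1,x2)": [0, 1], "permeability (x3,x4)": [2, 3]}
for name, idx in groups.items():
    # Pick-freeze: keep the group's columns from A, resample everything else.
    C = B.copy()
    C[:, idx] = A[:, idx]
    yC = model(C)
    s_group = (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / var_y
    print(f"first-order Sobol index, {name}: {s_group:.3f}")
```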

  14. A Python Interface for the Dakota Iterative Systems Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E.; Syvitski, J. P.

    2016-12-01

    Uncertainty quantification is required to improve the accuracy, reliability, and accountability of Earth science models. Dakota is a software toolkit, developed at Sandia National Laboratories, that provides an interface between models and a library of analysis methods, including support for sensitivity analysis, uncertainty quantification, optimization, and calibration techniques. Dakota is a powerful tool, but its learning curve is steep: the user not only must understand the structure and syntax of the Dakota input file, but also must develop intermediate code, called an analysis driver, that allows Dakota to run a model. The CSDMS Dakota interface (CDI) is a Python package that wraps and extends Dakota's user interface. It simplifies the process of configuring and running a Dakota experiment. A user can program to the CDI, allowing a Dakota experiment to be scripted. The CDI creates Dakota input files and provides a generic analysis driver. Any model written in Python that exposes a Basic Model Interface (BMI), as well as any model componentized in the CSDMS modeling framework, automatically works with the CDI. The CDI has a plugin architecture, so models written in other languages, or those that don't expose a BMI, can be accessed by the CDI by programmatically extending a template; an example is provided in the CDI distribution. Currently, six Dakota analysis methods have been implemented for examples from the much larger Dakota library. To demonstrate the CDI, we performed an uncertainty quantification experiment with the HydroTrend hydrological water balance and transport model. In the experiment, we evaluated the response of long-term suspended sediment load at the river mouth (Qs) to uncertainty in two input parameters, annual mean temperature (T) and precipitation (P), over a series of 100-year runs, using the polynomial chaos method. Through Dakota, we calculated moments, local and global (Sobol') sensitivity indices, and probability density and cumulative distribution functions for the response.
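
    To give a flavour of the analysis-driver layer that the CDI automates, the sketch below shows the generic pattern: Dakota writes a parameters file, the driver runs the model and writes a results file. The file parsing here assumes a simplified 'value descriptor' layout and the response function is a made-up stand-in for HydroTrend; a real driver should follow the formats in the Dakota documentation or be generated by a wrapper such as the CDI.

```python
"""Simplified analysis-driver sketch in the style Dakota expects: read a
parameters file, run the model, write a results file."""
import sys


def run_model(params):
    # Hypothetical stand-in for a model such as HydroTrend: suspended sediment
    # load as a simple function of temperature T and precipitation P.
    return 50.0 + 8.0 * params.get("T", 0.0) + 30.0 * params.get("P", 0.0)


def main(params_path, results_path):
    params = {}
    with open(params_path) as f:
        for line in f:
            tokens = line.split()
            if len(tokens) == 2:
                try:
                    params[tokens[1]] = float(tokens[0])
                except ValueError:
                    pass                     # skip header/bookkeeping lines
    qoi = run_model(params)
    with open(results_path, "w") as f:
        f.write(f"{qoi:.6e} Qs\n")           # one response value per line


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```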

  15. Uncertainty in temperature-based determination of time of death

    NASA Astrophysics Data System (ADS)

    Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan

    2018-03-01

    Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation, as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that, and from prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.

  16. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties in system operation, stability, and reliability in smart grids. In this paper, nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single-level PI, wind power forecast uncertainties are represented by a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. The wind power scenarios are then incorporated into a stochastic security-constrained unit commitment (SCUC) model. A heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporating interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
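
    A minimal sketch of the quantile-to-scenario step described above, assuming hypothetical quantile points for a single hour; the ECDF is approximated piecewise-linearly and sampled by inverse transform. It is not the authors' implementation.

```python
# Illustrative sketch: turn wind-power quantiles derived from prediction
# intervals into an ECDF and draw Monte Carlo scenarios by inverse transform.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical quantile points for one hour: (probability level, wind power in MW)
probs = np.array([0.05, 0.10, 0.25, 0.50, 0.75, 0.90, 0.95])
power = np.array([12.0, 15.0, 22.0, 30.0, 38.0, 45.0, 50.0])

def sample_scenarios(probs, values, n_scenarios):
    """Inverse-transform sampling from a piecewise-linear ECDF fitted to quantiles."""
    u = rng.uniform(probs[0], probs[-1], size=n_scenarios)
    return np.interp(u, probs, values)

scenarios = sample_scenarios(probs, power, n_scenarios=10)
print(scenarios)  # candidate wind-power scenarios to feed the stochastic SCUC
```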

  17. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  18. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme that adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  19. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme that adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  20. On the short-term uncertainty in performance of a point absorber wave energy converter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Ryan Geoffrey; Michelen, Carlos; Manuel, Lance

    2016-03-01

    Of interest in this study is the quantification of uncertainty in the performance of a two-body wave point absorber (Reference Model 3 or RM3), which serves as a wave energy converter (WEC). We demonstrate how simulation tools may be used to establish short-term relationships between any performance parameter of the WEC device and wave height in individual sea states. We demonstrate this methodology for two sea states. Efficient structural reliability methods, validated using more expensive Monte Carlo sampling, allow the estimation of uncertainty in performance of the device. Such methods, when combined with metocean data quantifying the likelihood of different sea states, can be useful in long-term studies and in reliability-based design.

  1. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
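
    The random-sampling idea behind XSUSA can be illustrated with a toy calculation: draw correlated relative perturbations of multi-group cross sections from an assumed covariance matrix and propagate each sample through a stand-in lattice response. All numbers below are invented for illustration and are not SCALE 6.1 covariance data.

```python
# Minimal sketch of XSUSA-style random sampling of nuclear data: perturb
# multi-group cross sections according to a covariance matrix and propagate
# each sample through a (here trivial) calculation.
import numpy as np

rng = np.random.default_rng(1)

sigma_nominal = np.array([1.20, 0.85, 0.40])        # group-wise cross sections (made up)
rel_cov = np.array([[0.0004, 0.0001, 0.0000],
                    [0.0001, 0.0009, 0.0002],
                    [0.0000, 0.0002, 0.0016]])       # assumed relative covariance data

def k_inf(sigma):
    # Stand-in for the lattice calculation (HELIOS2 in the paper).
    return 1.0 + 0.1 * sigma[0] - 0.05 * sigma[1] + 0.02 * sigma[2]

perturbations = rng.multivariate_normal(mean=np.zeros(3), cov=rel_cov, size=500)
k_values = np.array([k_inf(sigma_nominal * (1.0 + d)) for d in perturbations])

print(f"k_inf = {k_values.mean():.5f} +/- {k_values.std(ddof=1):.5f}")
```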

  2. Uncertainty quantification and reliability assessment in operational oil spill forecast modeling system.

    PubMed

    Hou, Xianlong; Hodges, Ben R; Feng, Dongyu; Liu, Qixiao

    2017-03-15

    As oil transport increases in the Texas bays, greater risk of ship collisions becomes a challenge, with oil spill accidents as a consequence. To minimize the ecological damage and optimize rapid response, emergency managers need to be informed of how fast and where oil will spread as soon as possible after a spill. State-of-the-art operational oil spill forecast modeling systems have brought oil spill response to a new stage. However, uncertainty in the predicted data inputs often compromises the reliability of the forecast results, leading to misdirection in contingency planning. Understanding forecast uncertainty and reliability therefore becomes significant. In this paper, Monte Carlo simulation is implemented to provide parameters to generate forecast probability maps. The oil spill forecast uncertainty is thus quantified by comparing the forecast probability map and the associated hindcast simulation. A simple HyosPy-based statistical model is developed to assess the reliability of an oil spill forecast in terms of belief degree. The technologies developed in this study create a prototype for uncertainty and reliability analysis in numerical oil spill forecast modeling systems, helping emergency managers improve the capability of real-time operational oil spill response and impact assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
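
    The forecast probability map can be thought of as an ensemble hit-count per grid cell. The sketch below uses a random-walk stand-in for the trajectory model; it only illustrates the counting step, not HyosPy or the underlying hydrodynamic forecast.

```python
# Toy sketch of a forecast probability map: run an ensemble of spill
# trajectories under perturbed forcing and count, per grid cell, the fraction
# of ensemble members whose oil passes through that cell.
import numpy as np

rng = np.random.default_rng(0)
n_members, n_steps, ngrid = 200, 48, 50
hit = np.zeros((ngrid, ngrid))

for _ in range(n_members):
    pos = np.array([25.0, 25.0])                   # release point (grid units)
    drift = rng.normal([0.3, 0.1], 0.2, size=2)    # perturbed mean current for this member
    visited = np.zeros((ngrid, ngrid), dtype=bool)
    for _ in range(n_steps):
        pos = pos + drift + rng.normal(0.0, 0.3, size=2)   # advection + spreading
        i, j = np.clip(pos.astype(int), 0, ngrid - 1)
        visited[i, j] = True
    hit += visited

probability_map = hit / n_members                  # forecast probability of oiling
print(probability_map.max())
```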

  3. Molecular nonlinear dynamics and protein thermal uncertainty quantification

    PubMed Central

    Xia, Kelin; Wei, Guo-Wei

    2014-01-01

    This work introduces molecular nonlinear dynamics (MND) as a new approach for describing protein folding and aggregation. By using a mode system, we show that the MND of disordered proteins is chaotic while that of folded proteins exhibits intrinsically low dimensional manifolds (ILDMs). The stability of ILDMs is found to strongly correlate with protein energies. We propose a novel method for protein thermal uncertainty quantification based on persistently invariant ILDMs. Extensive comparison with experimental data and the state-of-the-art methods in the field validate the proposed new method for protein B-factor prediction. PMID:24697365

  4. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.

  5. Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, J. R.; Urban, N. M.

    2015-12-01

    Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one-at-a-time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and to the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward determining the values of these most influential parameters more accurately, through observational studies or by improving existing parameterizations in the sea ice model.
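
    The emulator step can be sketched as follows: train a Gaussian-process surrogate on a limited set of runs, cross-validate it, and then query it densely for the sensitivity indices. The inputs, output, and dimensionality below are placeholders rather than actual CICE parameters.

```python
# Rough sketch of the surrogate (emulator) step used to make Sobol analysis affordable.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

X = rng.uniform(size=(400, 5))                          # 400 sampled parameter sets (toy, 5-D)
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=400)   # stand-in "sea ice extent"

emulator = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
print("CV R^2:", cross_val_score(emulator, X, y, cv=5).mean())   # cross-validate before trusting it
emulator.fit(X, y)

X_new = rng.uniform(size=(10_000, 5))                   # cheap evaluations for sensitivity indices
y_new = emulator.predict(X_new)                          # feed these into a Sobol estimator
```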

  6. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6 storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.

  7. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  8. Multiscale Characterization and Quantification of Arsenic Mobilization and Attenuation During Injection of Treated Coal Seam Gas Coproduced Water into Deep Aquifers

    NASA Astrophysics Data System (ADS)

    Rathi, Bhasker; Siade, Adam J.; Donn, Michael J.; Helm, Lauren; Morris, Ryan; Davis, James A.; Berg, Michael; Prommer, Henning

    2017-12-01

    Coal seam gas production involves generation and management of large amounts of co-produced water. One of the most suitable methods of management is injection into deep aquifers. Field injection trials may be used to support the predictions of anticipated hydrological and geochemical impacts of injection. The present work employs reactive transport modeling (RTM) for a comprehensive analysis of data collected from a trial where arsenic mobilization was observed. Arsenic sorption behavior was studied through laboratory experiments, accompanied by the development of a surface complexation model (SCM). A field-scale RTM that incorporated the laboratory-derived SCM was used to simulate the data collected during the field injection trial and then to predict the long-term fate of arsenic. We propose a new practical procedure which integrates laboratory and field-scale models using a Monte Carlo type uncertainty analysis and alleviates a significant proportion of the computational effort required for predictive uncertainty quantification. The results illustrate that both arsenic desorption under alkaline conditions and pyrite oxidation have likely contributed to the arsenic mobilization that was observed during the field trial. The predictive simulations show that arsenic concentrations would likely remain very low if the potential for pyrite oxidation is minimized through complete deoxygenation of the injectant. The proposed modeling and predictive uncertainty quantification method can be implemented for a wide range of groundwater studies that investigate the risks of metal(loid) or radionuclide contamination.

  9. Determination of boron in uranium aluminum silicon alloy by spectrophotometry and estimation of expanded uncertainty in measurement

    NASA Astrophysics Data System (ADS)

    Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.

    2008-08-01

    Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethyl hexane 1,3-diol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.
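
    The uncertainty combination follows the usual ISO-GUM recipe: component standard uncertainties added in quadrature and multiplied by a coverage factor of 2 for approximately 95% confidence. A minimal sketch with invented component values (not the authors' uncertainty budget):

```python
# ISO-GUM-style combination of relative standard uncertainties; the component
# values below are illustrative placeholders.
import math

components = {            # relative standard uncertainties (fractions)
    "leaching recovery": 0.015,
    "extraction yield":  0.020,
    "calibration curve": 0.025,
    "volume/weighing":   0.010,
    "spectrophotometer": 0.018,
}

u_combined = math.sqrt(sum(u ** 2 for u in components.values()))   # quadrature sum
U_expanded = 2.0 * u_combined                                      # coverage factor k = 2 (~95%)
print(f"combined: {100 * u_combined:.2f}%  expanded (k=2): {100 * U_expanded:.2f}%")
```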

  10. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.

  11. Uncertainty quantification of resonant ultrasound spectroscopy for material property and single crystal orientation estimation on a complex part

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    A case study is presented evaluating uncertainty in Resonance Ultrasound Spectroscopy (RUS) inversion for single-crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed with FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin Hypercube sampling) and model approaches, N-dimensional cubic spline interpolation and Kriging. Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty in material property and crystallographic orientation estimates was assessed given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model more closely represents the test data. One important discovery was that, when the model matches the test data well, a Kriging surrogate model using un-sorted Latin Hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.

  12. Uncertainty quantification in nanomechanical measurements using the atomic force microscope

    Treesearch

    Ryan Wagner; Robert Moon; Jon Pratt; Gordon Shaw; Arvind Raman

    2011-01-01

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale...

  13. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov Chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed methodology generates realistic fault network models conditioned to data and a conceptual model of the underlying tectonics.

  14. Uncertainty Quantification in Geomagnetic Field Modeling

    NASA Astrophysics Data System (ADS)

    Chulliat, A.; Nair, M. C.; Alken, P.; Meyer, B.; Saltus, R.; Woods, A.

    2017-12-01

    Geomagnetic field models are mathematical descriptions of the various sources of the Earth's magnetic field, and are generally obtained by solving an inverse problem. They are widely used in research to separate and characterize field sources, but also in many practical applications such as aircraft and ship navigation, smartphone orientation, satellite attitude control, and directional drilling. In recent years, more sophisticated models have been developed, thanks to the continuous availability of high quality satellite data and to progress in modeling techniques. Uncertainty quantification has become an integral part of model development, both to assess the progress made and to address specific users' needs. Here we report on recent advances made by our group in quantifying the uncertainty of geomagnetic field models. We first focus on NOAA's World Magnetic Model (WMM) and the International Geomagnetic Reference Field (IGRF), two reference models of the main (core) magnetic field produced every five years. We describe the methods used in quantifying the model commission error as well as the omission error attributed to various un-modeled sources such as magnetized rocks in the crust and electric current systems in the atmosphere and near-Earth environment. A simple error model was derived from this analysis, to facilitate usage in practical applications. We next report on improvements brought by combining a main field model with a high resolution crustal field model and a time-varying, real-time external field model, like in NOAA's High Definition Geomagnetic Model (HDGM). The obtained uncertainties are used by the directional drilling industry to mitigate health, safety and environment risks.

  15. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
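
    The reliability-physics estimate reduces to the probability that the operators' performance time exceeds the phenomenological time available. A minimal Monte Carlo sketch, with assumed lognormal distributions and parameters chosen only for illustration:

```python
# Sketch of the competing-random-variables idea: HEP = P(performance time >
# phenomenological time).  Distributions and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Phenomenological time (min): e.g. lognormal fitted to simulation results (assumed)
t_phenomenological = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=n)
# Performance time (min): e.g. lognormal elicited from operator interviews (assumed)
t_performance = rng.lognormal(mean=np.log(35.0), sigma=0.5, size=n)

hep = np.mean(t_performance > t_phenomenological)
print(f"Estimated HEP for the manual action: {hep:.4f}")
```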

  16. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
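
    A compact sketch of the Latin Hypercube water-budget calculation, treating SWGW exchange as the residual that closes the budget; the input ranges are invented, and the sampling below uses scipy.stats.qmc rather than the pyDOE package cited above.

```python
# Unconstrained Monte Carlo / Latin Hypercube sketch for the uncertainty of a
# water-budget residual.  All ranges are hypothetical monthly values in mm.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(n=5000)                                  # LHS design in [0, 1)^4

lo = [80.0, 60.0, -20.0, -30.0]                             # rainfall, ET, canal, storage change
hi = [120.0, 140.0, 20.0, 30.0]
rain, et, canal, dstorage = qmc.scale(u, lo, hi).T          # scale columns to assumed ranges

# SWGW exchange inferred as the residual that closes the monthly budget
swgw = dstorage - (rain - et + canal)
print(np.percentile(swgw, [5, 50, 95]))                     # spread of the residual estimate
```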

  17. Uncertainty Quantification applied to flow simulations in thoracic aortic aneurysms

    NASA Astrophysics Data System (ADS)

    Boccadifuoco, Alessandro; Mariotti, Alessandro; Celi, Simona; Martini, Nicola; Salvetti, Maria Vittoria

    2015-11-01

    The thoracic aortic aneurysm is a progressive dilatation of the thoracic aorta causing a weakening of the aortic wall, which may eventually lead to life-threatening events. Clinical decisions on treatment strategies are currently based on empiric criteria, like the aortic diameter value or its growth rate. Numerical simulations can provide the quantification of important indexes that are impossible to obtain through in-vivo measurements and can provide supplementary information. Hemodynamic simulations are carried out by using the open-source tool SimVascular and considering patient-specific geometries. One of the main issues in these simulations is the choice of suitable boundary conditions, modeling the organs and vessels not included in the computational domain. The current practice is to use outflow conditions based on resistance and capacitance, whose values are tuned to obtain a physiological behavior of the patient pressure. However, it is not known a priori how this choice affects the results of the simulation. The impact of the uncertainties in these outflow parameters is investigated here by using the generalized Polynomial Chaos approach. This analysis also permits calibration of the outflow-boundary parameters when patient-specific in-vivo data are available.
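
    A minimal non-intrusive generalized Polynomial Chaos sketch for two uncertain outflow parameters: fit a Legendre expansion by least squares to a handful of runs and read the mean and variance from the coefficients. The response function and parameter ranges are stand-ins, not SimVascular output.

```python
# Non-intrusive gPC sketch: regression on a tensor Legendre basis for two
# uncertain inputs mapped to uniform [-1, 1].
import numpy as np
from numpy.polynomial.legendre import legvander2d

rng = np.random.default_rng(0)

def model(r, c):                       # stand-in haemodynamic response
    return 5.0 + 1.5 * r - 0.8 * c + 0.6 * r * c

n_runs, deg = 64, 3
xi = rng.uniform(-1.0, 1.0, size=(n_runs, 2))     # uncertain resistance/capacitance (scaled)
y = model(xi[:, 0], xi[:, 1])

V = legvander2d(xi[:, 0], xi[:, 1], [deg, deg])   # tensor Legendre basis evaluations
coef, *_ = np.linalg.lstsq(V, y, rcond=None)
coef = coef.reshape(deg + 1, deg + 1)

# Uniform-weighted norms of Legendre polynomials: E[P_i^2] = 1 / (2 i + 1)
norms = 1.0 / (2.0 * np.arange(deg + 1) + 1.0)
var = np.sum(coef ** 2 * np.outer(norms, norms)) - coef[0, 0] ** 2
print("gPC mean:", coef[0, 0], " gPC std:", np.sqrt(var))
```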

  18. System Identification and Uncertainty Quantification Using Orthogonal Excitations and the Semi-span Super Sonic Transport (S4T) Model

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Wieseman, Carol D.

    2012-01-01

    Orthogonal harmonic multisine excitations were utilized in a wind tunnel test and in simulation of the SemiSpan Supersonic Transport model to assess aeroservoelastic characteristics. Fundamental issues associated with analyzing sinusoidal signals were examined, including spectral leakage, excitation truncation, and uncertainties on frequency response functions and mean-square coherence. Simulation allowed for evaluation of these issues relative to a truth model, while wind tunnel data introduced real-world implementation issues.
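
    The frequency response function and coherence estimation alluded to above can be sketched with Welch-type spectral estimates; the excitation lines, sample rate, and the synthetic "response" below are assumptions, not S4T wind tunnel data.

```python
# Illustrative sketch: H1 frequency response function and mean-square coherence
# estimated from a multisine excitation and a synthetic measured response.
import numpy as np
from scipy import signal

fs = 200.0                                        # sample rate, Hz (assumed)
t = np.arange(0, 60.0, 1.0 / fs)
exc_lines = np.array([2.0, 3.5, 5.0, 7.5])        # hypothetical multisine frequencies
phases = np.random.default_rng(0).uniform(0, 2 * np.pi, exc_lines.size)

x = np.sum([np.sin(2 * np.pi * f * t + p) for f, p in zip(exc_lines, phases)], axis=0)
# Stand-in "measured" response: a low-pass system plus measurement noise
b, a = signal.butter(2, 10.0, fs=fs)
y = signal.lfilter(b, a, x) + 0.05 * np.random.default_rng(1).normal(size=t.size)

f, Pxx = signal.welch(x, fs=fs, nperseg=2048)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=2048)
H1 = Pxy / Pxx                                    # frequency response function estimate
_, coh = signal.coherence(x, y, fs=fs, nperseg=2048)
print("peak coherence:", coh.max())
```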

  19. Uncertainty quantification in flood risk assessment

    NASA Astrophysics Data System (ADS)

    Blöschl, Günter; Hall, Julia; Kiss, Andrea; Parajka, Juraj; Perdigão, Rui A. P.; Rogger, Magdalena; Salinas, José Luis; Viglione, Alberto

    2017-04-01

    Uncertainty is inherent to flood risk assessments because of the complexity of the human-water system, which is characterised by nonlinearities and interdependencies, because of limited knowledge about system properties and because of cognitive biases in human perception and decision-making. On top of the uncertainty associated with the assessment of the existing risk to extreme events, additional uncertainty arises because of temporal changes in the system due to climate change, modifications of the environment, population growth and the associated increase in assets. Novel risk assessment concepts are needed that take into account all these sources of uncertainty. They should be based on the understanding of how flood extremes are generated and how they change over time. They should also account for the dynamics of risk perception of decision makers and population in the floodplains. In this talk we discuss these novel risk assessment concepts through examples from Flood Frequency Hydrology, Socio-Hydrology and Predictions Under Change. We believe that uncertainty quantification in flood risk assessment should lead to a robust approach of integrated flood risk management aiming at enhancing resilience rather than searching for optimal defense strategies.

  20. DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James; Bayless, Paul; Strydom, Gerhard

    2016-11-01

    Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas Cooled Reactors (HTGRs), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to neutron cross-section covariances. Phase II is oriented towards the investigation of uncertainties propagated from the lattice to the coupled neutronics/thermal hydraulics core calculations. Nominal results for the prismatic single block (Ex.I-2a) and super cell models (Ex.I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross section generation. In this work, the TRITON/NEWT-flux-weighted cross sections obtained for Ex.I-2a and various models of Ex.I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex.II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.

  1. The NETL MFiX Suite of multiphase flow models: A brief review and recent applications of MFiX-TFM to fossil energy technologies

    DOE PAGES

    Li, Tingwen; Rogers, William A.; Syamlal, Madhava; ...

    2016-07-29

    Here, the MFiX suite of multiphase computational fluid dynamics (CFD) codes is being developed at the U.S. Department of Energy's National Energy Technology Laboratory (NETL). It includes several different approaches to multiphase simulation: MFiX-TFM, a two-fluid (Eulerian–Eulerian) model; MFiX-DEM, an Eulerian fluid model with a Lagrangian Discrete Element Model for the solids phase; and MFiX-PIC, an Eulerian fluid model with Lagrangian particle ‘parcels’ representing particle groups. These models are undergoing continuous development and application, with verification, validation, and uncertainty quantification (VV&UQ) as integrated activities. After a brief summary of recent VV&UQ progress, this article highlights two recent accomplishments in the application of MFiX-TFM to fossil energy technology development. First, a recent application of MFiX to the pilot-scale KBR TRIG™ Transport Gasifier located at DOE's National Carbon Capture Center (NCCC) is described. Gasifier performance over a range of operating conditions was modeled and compared to NCCC operational data to validate the ability of the model to predict parametric behavior. Second, comparison of code predictions at a detailed fundamental scale is presented, studying solid sorbents for the post-combustion capture of CO2 from flue gas. Specifically designed NETL experiments are being used to validate hydrodynamics and chemical kinetics for the sorbent-based carbon capture process.

  2. Quantification of CO2 generation in sedimentary basins through carbonate/clays reactions with uncertain thermodynamic parameters

    NASA Astrophysics Data System (ADS)

    Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.

    2017-09-01

    We develop a methodological framework and mathematical formulation that yield estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems, to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from mineral and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and typically used geochemical software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, and (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.

  3. Risk assessment of climate systems for national security.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backus, George A.; Boslough, Mark Bruce Elrick; Brown, Theresa Jean

    2012-10-01

    Climate change, through drought, flooding, storms, heat waves, and melting Arctic ice, affects the production and flow of resources within and among geographical regions. The interactions among governments, populations, and sectors of the economy require integrated assessment based on risk, through uncertainty quantification (UQ). This project evaluated the capabilities within Sandia National Laboratories to perform such integrated analyses as they relate to (inter)national security. Combining the UQ results from climate models with hydrological and economic/infrastructure impact modeling appears to offer the best capability for national security risk assessments.

  4. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of error functions represent the uncertainty of the calibrated model in predicting building's response (modal parameters here). The focus of this paper is to answer whether the quantified model uncertainties using dynamic measurement at building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values is studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process instead of commonly-used zero-mean error function can significantly reduce the prediction uncertainties.

  5. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.

  6. Quantification of uncertainty associated with United States high resolution fossil fuel CO2 emissions: updates, challenges and future plans

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.; Chandrasekaran, V.; Mendoza, D. L.; Geethakumar, S.

    2010-12-01

    The Vulcan Project has estimated United States fossil fuel CO2 emissions at the hourly time scale and at spatial scales below the county level for the year 2002. Vulcan is built from a wide variety of observational data streams including regulated air pollutant emissions reporting, traffic monitoring, energy statistics, and US census data. In addition to these data sets, Vulcan relies on a series of modeling assumptions and constructs to interpolate in space, time and transform non-CO2 reporting into an estimate of CO2 combustion emissions. The recent version 2.0 of the Vulcan inventory has produced advances in a number of categories with particular emphasis on improved temporal structure. Onroad transportation emissions now avail of roughly 5000 automated traffic count monitors allowing for much improved diurnal and weekly time structure in our onroad transportation emissions. Though the inventory shows excellent agreement with independent national-level CO2 emissions estimates, uncertainty quantification has been a challenging task given the large number of data sources and numerous modeling assumptions. However, we have now accomplished a complete uncertainty estimate across all the Vulcan economic sectors and will present uncertainty estimates as a function of space, time, sector and fuel. We find that, like the underlying distribution of CO2 emissions themselves, the uncertainty is also strongly lognormal with high uncertainty associated with a relatively small number of locations. These locations typically are locations reliant upon coal combustion as the dominant CO2 source. We will also compare and contrast Vulcan fossil fuel CO2 emissions estimates against estimates built from DOE fuel-based surveys at the state level. We conclude that much of the difference between the Vulcan inventory and DOE statistics are not due to biased estimation but mechanistic differences in supply versus demand and combustion in space/time.

  7. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
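
    The central observation can be reproduced in a few lines: with the additive linear error model y = a + b*x + eps, eps ~ N(0, sigma_e^2), the bias, mean square error, and correlation follow directly from (a, b, sigma_e). The sketch below checks the closed-form expressions against empirical metrics on synthetic data; the parameter values are invented.

```python
# Metrics derived from the parameters of a simple additive linear error model.
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma_e = 0.5, 0.9, 1.2                     # error-model parameters (assumed)
x = rng.normal(10.0, 3.0, size=200_000)           # reference ("truth")
y = a + b * x + rng.normal(0.0, sigma_e, size=x.size)   # model or measurement

mu_x, var_x = x.mean(), x.var()
bias_pred = a + (b - 1.0) * mu_x
mse_pred = bias_pred ** 2 + (b - 1.0) ** 2 * var_x + sigma_e ** 2
corr_pred = b * np.sqrt(var_x) / np.sqrt(b ** 2 * var_x + sigma_e ** 2)

print(bias_pred, np.mean(y - x))                  # bias: closed form vs empirical
print(mse_pred, np.mean((y - x) ** 2))            # mean square error
print(corr_pred, np.corrcoef(x, y)[0, 1])         # linear correlation coefficient
```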

  8. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multi-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared to the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.

  9. Toward best practice framing of uncertainty in scientific publications: A review of Water Resources Research abstracts

    NASA Astrophysics Data System (ADS)

    Guillaume, Joseph H. A.; Helgeson, Casey; Elsawah, Sondoss; Jakeman, Anthony J.; Kummu, Matti

    2017-08-01

    Uncertainty is recognized as a key issue in water resources research, among other sciences. Discussions of uncertainty typically focus on tools and techniques applied within an analysis, e.g., uncertainty quantification and model validation. But uncertainty is also addressed outside the analysis, in writing scientific publications. The language that authors use conveys their perspective of the role of uncertainty when interpreting a claim—what we call here "framing" the uncertainty. This article promotes awareness of uncertainty framing in four ways. (1) It proposes a typology of eighteen uncertainty frames, addressing five questions about uncertainty. (2) It describes the context in which uncertainty framing occurs. This is an interdisciplinary topic, involving philosophy of science, science studies, linguistics, rhetoric, and argumentation. (3) We analyze the use of uncertainty frames in a sample of 177 abstracts from the Water Resources Research journal in 2015. This helped develop and tentatively verify the typology, and provides a snapshot of current practice. (4) We make provocative recommendations to achieve a more influential, dynamic science. Current practice in uncertainty framing might be described as carefully considered incremental science. In addition to uncertainty quantification and degree of belief (present in ˜5% of abstracts), uncertainty is addressed by a combination of limiting scope, deferring to further work (˜25%) and indicating evidence is sufficient (˜40%)—or uncertainty is completely ignored (˜8%). There is a need for public debate within our discipline to decide in what context different uncertainty frames are appropriate. Uncertainty framing cannot remain a hidden practice evaluated only by lone reviewers.

  10. Modelling the dynamics and hazards of explosive eruptions: Where we are now, and confronting the next challenges (Sergey Soloviev Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Neri, Augusto

    2017-04-01

    Understanding of explosive eruption dynamics and assessment of their hazards continue to represent challenging issues to the present-day volcanology community. This is largely due to the complex and diverse nature of the phenomena, and the variability and unpredictability of volcanic processes. Nevertheless, important and continuing progress has been made in the last few decades in understanding fundamental processes and in forecasting the occurrences of these phenomena, thanks to significant advances in field, experimental and theoretical modeling investigations. For over four decades, for example, volcanologists have made major progress in the description of the nature of explosive eruptions, considerably aided by the development, improvement, and application of physical-mathematical models. Integral steady-state homogeneous flow models were first used to investigate the different controlling mechanisms and to infer the genesis and evolution of the phenomena. Through continuous improvements and quantum-leap developments, a variety of transient, 3D, multiphase flow models of volcanic phenomena now can implement state-of-the-art formulations of the underlying physics, new-generation analytical and experimental data, as well as high-performance computational techniques. These numerical models have proved to be able to provide key insights in the understanding of the dynamics of explosive eruptions (e.g. convective plumes, collapsing columns, pyroclastic density currents, short-lived explosions, etc.), as well as to represent a valuable tool in the quantification of potential eruptive scenarios and associated hazards. Simplified models based on a reduction of the system complexity have been also proved useful, combined with Monte Carlo and statistical methods, to generate quantitative probabilistic hazard maps at different space and time scales, some including the quantification of important sources of uncertainty. Nevertheless, the development of physical models able to accurately replicate, within acceptable statistical uncertainty, the evolution of explosive eruptions remains a challenging goal still to be achieved. Testing of the developed models versus large-scale experimental data and well-measured real events, real-time assimilation of observational data to forecast the process nature and evolution, as well as the quantification of the uncertainties affecting our system and modelling representations appear key next steps to further progress volcanological research and its essential contribution to the mitigation of volcanic risk.

  11. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; and (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i - iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and how they assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  12. Impact of Hydrogeological Uncertainty on Estimation of Environmental Risks Posed by Hydrocarbon Transportation Networks

    NASA Astrophysics Data System (ADS)

    Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.

    2017-11-01

    Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the correspondent remediation costs.
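    A minimal sketch of the second, surrogate-style model described above: the quantity of interest is represented as a polynomial in the random inputs, fitted to a modest number of full-model runs and then evaluated on a large Monte Carlo ensemble of scenarios. The model function, sample sizes, and parameter names are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch (assumed workflow, not the authors' code): fit a global
# polynomial surrogate of a risk index as a function of uncertain inputs,
# then exploit its negligible cost to explore many contamination scenarios.
import numpy as np
from itertools import combinations_with_replacement

def polynomial_features(x, degree=2):
    """Monomial basis up to total degree `degree` for samples x of shape (n, d)."""
    n, d = x.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for combo in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(x[:, list(combo)], axis=1))
    return np.column_stack(cols)

def expensive_model(theta):
    # Hypothetical stand-in for the physics-based contamination model.
    return np.exp(0.5 * theta[:, 0]) + theta[:, 1] ** 2 + 0.3 * theta[:, 0] * theta[:, 2]

rng = np.random.default_rng(1)
train = rng.standard_normal((200, 3))            # a few hundred full-model runs
coeffs, *_ = np.linalg.lstsq(polynomial_features(train),
                             expensive_model(train), rcond=None)

# Cheap surrogate evaluation on a large Monte Carlo ensemble of scenarios.
scenarios = rng.standard_normal((100_000, 3))
risk = polynomial_features(scenarios) @ coeffs
print("mean risk:", risk.mean(), " 95th percentile:", np.quantile(risk, 0.95))
```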

  13. Global sensitivity analysis of multiscale properties of porous materials

    NASA Astrophysics Data System (ADS)

    Um, Kimoon; Zhang, Xuan; Katsoulakis, Markos; Plechac, Petr; Tartakovsky, Daniel M.

    2018-02-01

    Ubiquitous uncertainty about pore geometry inevitably undermines the veracity of pore- and multi-scale simulations of transport phenomena in porous media. It raises two fundamental issues: sensitivity of effective material properties to pore-scale parameters and statistical parameterization of Darcy-scale models that accounts for pore-scale uncertainty. Homogenization-based maps of pore-scale parameters onto their Darcy-scale counterparts facilitate both sensitivity analysis (SA) and uncertainty quantification. We treat uncertain geometric characteristics of a hierarchical porous medium as random variables to conduct global SA and to derive probabilistic descriptors of effective diffusion coefficients and effective sorption rate. Our analysis is formulated in terms of solute transport diffusing through a fluid-filled pore space, while sorbing to the solid matrix. Yet it is sufficiently general to be applied to other multiscale porous media phenomena that are amenable to homogenization.

  14. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.

  15. Calibration and Forward Uncertainty Propagation for Large-eddy Simulations of Engineering Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Templeton, Jeremy Alan; Blaylock, Myra L.; Domino, Stefan P.

    2015-09-01

    The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for LES. As the target methods are for engineering LES, uncertainty from numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost versus accuracy curve for LES such that the cost could be minimized given an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.

  16. Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zabaras, Nicolas J.

    2016-11-08

    Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.

  17. Constellation Program Lessons Learned in the Quantification and Use of Aerodynamic Uncertainty

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Hemsch, Michael J.; Pinier, Jeremy T.; Bibb, Karen L.; Chan, David T.; Hanke, Jeremy L.

    2011-01-01

    The NASA Constellation Program has worked for the past five years to develop a replacement for the current Space Transportation System. Of the elements that form the Constellation Program, only two require databases that define aerodynamic environments and their respective uncertainty: the Ares launch vehicles and the Orion crew and launch abort vehicles. Teams were established within the Ares and Orion projects to provide representative aerodynamic models including both baseline values and quantified uncertainties. A technical team was also formed within the Constellation Program to facilitate integration among the project elements. This paper is a summary of the collective experience of the three teams working with the quantification and use of uncertainty in aerodynamic environments: the Ares and Orion project teams as well as the Constellation integration team. Not all of the lessons learned discussed in this paper could be applied during the course of the program, but they are included in the hope of benefiting future projects.

  18. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    PubMed

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.

  19. Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings

    NASA Astrophysics Data System (ADS)

    Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.

    2014-12-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.).
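    The sketch below illustrates step (2) in a deliberately simplified form: Bayesian estimation of a single power-law rating curve Q = a (h - b)^c from a handful of uncertain gaugings using random-walk Metropolis sampling. The gauging values, prior, and tuning constants are hypothetical and not the authors' implementation, which handles multiple hydraulic controls and richer prior information.

```python
# Hedged sketch: Bayesian rating-curve fit from uncertain gaugings (toy data).
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical gaugings: stage h (m), discharge Q (m3/s), gauging std (m3/s).
h_obs = np.array([0.6, 0.9, 1.3, 1.8, 2.4, 3.1])
q_obs = np.array([2.1, 6.0, 14.5, 31.0, 60.0, 105.0])
q_sd = 0.07 * q_obs                          # individual gauging uncertainties

def log_post(theta):
    a, b, c = theta
    if a <= 0 or c <= 0 or b >= h_obs.min():     # keep h - b > 0
        return -np.inf
    q_model = a * (h_obs - b) ** c
    log_lik = -0.5 * np.sum(((q_obs - q_model) / q_sd) ** 2)
    log_prior = -0.5 * ((c - 1.67) / 0.5) ** 2   # weak hydraulic prior on the exponent
    return log_lik + log_prior

theta = np.array([5.0, 0.0, 1.67])               # initial guess
step = np.array([0.5, 0.05, 0.1])
samples, lp = [], log_post(theta)
for _ in range(20000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])                  # discard burn-in
print("posterior mean (a, b, c):", post.mean(axis=0).round(2))

# Rating-curve uncertainty propagated to discharge at a given stage:
q_at_2m = post[:, 0] * (2.0 - post[:, 1]) ** post[:, 2]
print("Q(h = 2 m) 90% interval:", np.quantile(q_at_2m, [0.05, 0.95]).round(1))
```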

  20. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  1. Conceptual coherence of college students in Newtonian mechanics and metrology (La coherence conceptuelle d'etudiants collegiaux en mecanique Newtonienne et en metrologie)

    NASA Astrophysics Data System (ADS)

    Periard, Martin

    This thesis evaluates the coherence of the conceptual network demonstrated by college students in life and applied sciences. This evaluation was based on the analysis of Burt tables issuing from multiple choice questionnaires, on the creation and careful examination of a novel tool, the matrix of specific discrimination coefficients, which will be described in the main text, and on the qualitative analysis of actual laboratory work of students doing an experiment. At the completion of this project, four research axes have been explored. (1) What is the conceptual coherence demonstrated in Newtonian mechanics? (2) Is the mastery of uncertainty quantification related to the development of logical thinking or to mathematical competency? (3) What is the conceptual coherence demonstrated in the quantification of experimental uncertainty? (4) What are the concrete procedures utilized by students to quantify experimental uncertainty in a semi-directed laboratory context? The main conclusions that emerged from each axis of research can be summarized as follows. (1) The most prevalent erroneous conceptions are not solidly set in a rigid conceptual network. For example, a student successful in a question about Newton's third law (the most difficult subject of the Force Concept Inventory) is just slightly more likely to succeed in another related question than the other participants. Many pairs of questions display a negative specific discrimination coefficient, demonstrating a weak conceptual coherence in the pre-test and a somewhat improved conceptual coherence in the post-test. (2) Although a small proportion of students demonstrated marked deficiencies in questions related to control of variables and in those related to the relationship between the graphical display of experimental data and a mathematical model, the majority of students can be considered to have adequately mastered those subjects. However, almost every student demonstrated a lack of mastery of the concepts underlying the quantification of experimental uncertainty and the propagation of uncertainty (hereafter referred to as metrology). No statistically significant correlation has been observed between the three main topics, suggesting that they are largely independent cognitive abilities. The Burt tables demonstrated a greater degree of conceptual coherence among the control-of-variables questions than suggested by the Pearson correlation coefficients. Equivalent questions on metrology did not reveal a clear conceptual coherence. (3) Analysis of a questionnaire entirely devoted to metrology showed erroneous conceptions caused by prior learning (didactical obstacles), erroneous conceptions based on intuitive models, and a lack of global comprehension of metrological concepts, although some appear to be almost acquired. (4) When doing real experiments in the semi-directed laboratory, students demonstrated the same difficulties identified in the questionnaire of (3), which can be interpreted as corroborating the previously obtained results. However, many behaviors related to measurement were observed that could not have been anticipated solely by analyzing answers to the multiple-choice questionnaire. Interviews immediately following each semi-directed laboratory session permitted the participants to detail certain aspects of their metrological methodology.
Most notable were the use of repeated-measurement strategies, their "spontaneous" strategies to quantify uncertainty, and their explanations of numerical estimates of reading uncertainties. Overall, uncertainty propagation algorithms were adequately employed. Many erroneous metrological conceptions appear strongly resistant to modification by learning, among them the assignment of the resolution of a digital scale as the uncertainty value and the absence of stacking strategies to diminish uncertainty. The conception that a numerical value cannot be more precise than the tolerance of an instrument seems firmly set. Key words: Burt tables, conceptual coherence, experimental uncertainty, laboratories, metrology, Newtonian mechanics, uncertainty propagation.

  2. The Difference between Uncertainty and Information, and Why This Matters

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.

    2016-12-01

    Earth science investigation and arbitration (for decision making) is very often organized around a concept of uncertainty. It seems relatively straightforward that the purpose of our science is to reduce uncertainty about how environmental systems will react and evolve under different conditions. I propose here that approaching a science of complex systems as a process of quantifying and reducing uncertainty is a mistake, and specifically a mistake that is rooted in certain rather historic logical errors. Instead I propose that we should be asking questions about information. I argue here that an information-based perspective facilitates almost trivial answers to environmental science questions that are either difficult or theoretically impossible to answer when posed as questions about uncertainty. In particular, I propose that an information-centric perspective leads to: (i) coherent and non-subjective hypothesis tests for complex system models; (ii) process-level diagnostics for complex systems models; (iii) methods for building complex systems models that allow for inductive inference without the need for a priori specification of likelihood functions or ad hoc error metrics; and (iv) asymptotically correct quantification of epistemic uncertainty. To put this in slightly more basic terms, I propose that an information-theoretic philosophy of science has the potential to resolve certain important aspects of the Demarcation Problem and the Duhem-Quine Problem, and that Hydrology and other Earth Systems Sciences can immediately capitalize on this to address some of our most difficult and persistent problems.

  3. Quantification for complex assessment: uncertainty estimation in final year project thesis assessment

    NASA Astrophysics Data System (ADS)

    Kim, Ho Sung

    2013-12-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.

  4. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a poly-domain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.
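    A minimal sketch of the generic active-subspace construction referred to above: estimate C = E[grad f grad f^T] from sampled gradients, take its eigendecomposition, and use the leading eigenvectors as reduced coordinates. The model, gradient computation, and dimensions below are placeholders, not the phase-field model itself.

```python
# Hedged sketch (generic active-subspace recipe, not the authors' code).
import numpy as np

rng = np.random.default_rng(3)

def model(theta):
    # Hypothetical stand-in for the scalar quantity of interest.
    return np.exp(0.7 * theta[0] - 0.3 * theta[1]) + 0.01 * theta[2] ** 2

def grad(theta, eps=1e-6):
    # Finite-difference gradient (analytic/adjoint gradients would be used in practice).
    g = np.zeros_like(theta)
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (model(theta + d) - model(theta - d)) / (2 * eps)
    return g

# Monte Carlo estimate of C = E[grad f grad f^T] over the input distribution.
thetas = rng.standard_normal((500, 3))
C = np.mean([np.outer(grad(t), grad(t)) for t in thetas], axis=0)
eigvals, eigvecs = np.linalg.eigh(C)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]     # descending order

print("eigenvalues:", eigvals.round(4))     # a large gap suggests a 1-D active subspace
W1 = eigvecs[:, :1]                         # active direction(s)
active_vars = thetas @ W1                   # reduced coordinates for surrogate/Bayesian inference
```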

  5. Application of a baseflow filter for evaluating model structure suitability of the IHACRES CMD

    NASA Astrophysics Data System (ADS)

    Kim, H. S.

    2015-02-01

    The main objective of this study was to assess the predictive uncertainty from the rainfall-runoff model structure coupling a conceptual module (non-linear module) with a metric transfer function module (linear module). The methodology was primarily based on the comparison between the outputs of the rainfall-runoff model and those from an alternative model approach. An alternative model approach was used to minimise uncertainties arising from data and the model structure. A baseflow filter was adopted to better understand deficiencies in the forms of the rainfall-runoff model by avoiding the uncertainties related to data and the model structure. The predictive uncertainty from the model structure was investigated for representative groups of catchments having similar hydrological response characteristics in the upper Murrumbidgee Catchment. In the assessment of model structure suitability, the consistency (or variability) of catchment response over time and space in model performance and parameter values has been investigated to detect problems related to the temporal and spatial variability of the model accuracy. The predictive error caused by model uncertainty was evaluated through analysis of the variability of the model performance and parameters. A graphical comparison of model residuals, effective rainfall estimates and hydrographs was used to determine a model's ability related to systematic model deviation between simulated and observed behaviours and general behavioural differences in the timing and magnitude of peak flows. The model's predictability was very sensitive to catchment response characteristics. The linear module performs reasonably well in the wetter catchments but has considerable difficulties when applied to the drier catchments where a hydrologic response is dominated by quick flow. The non-linear module has a potential limitation in its capacity to capture non-linear processes for converting observed rainfall into effective rainfall in both the wetter and drier catchments. The comparative study based on a better quantification of the accuracy and precision of hydrological modelling predictions yields a better understanding for the potential improvement of model deficiencies.
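    For readers unfamiliar with the tool mentioned above, the snippet below shows one common one-parameter recursive baseflow filter (Lyne-Hollick form). The study does not specify which filter was adopted, so this is only a generic illustration of the class of method, with made-up streamflow values.

```python
# Hedged illustration: generic one-parameter recursive baseflow separation.
import numpy as np

def baseflow_filter(streamflow, alpha=0.925):
    """Separate a streamflow series into baseflow and quickflow components."""
    q = np.asarray(streamflow, dtype=float)
    quick = np.zeros_like(q)
    for k in range(1, q.size):
        f = alpha * quick[k - 1] + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        quick[k] = min(max(f, 0.0), q[k])        # keep quickflow physical
    return q - quick, quick                       # baseflow, quickflow

flow = np.array([1.0, 1.2, 5.0, 9.0, 6.0, 3.5, 2.0, 1.5, 1.2, 1.1])
base, quick = baseflow_filter(flow)
print("baseflow fraction:", round(base.sum() / flow.sum(), 2))
```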

  6. PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabharwall, Piyush; Skifton, Richard; Stoots, Carl

    2013-12-01

    Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This image displacement is indicated by the location of the largest peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. Currently, this study is developing a reliable method to analyze uncertainty and sensitivity of the measured data and develop a computer code to automatically analyze the uncertainty/sensitivity of the measured data. The main objective of this study is to develop a well established uncertainty quantification method for the MIR Flow Facility, which consists of many complicated uncertainty factors. In this study, the uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and PIV system (including particle motion, image distortion, and data processing). Then, each uncertainty source is mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.

  7. Wave-optics uncertainty propagation and regression-based bias model in GNSS radio occultation bending angle retrievals

    NASA Astrophysics Data System (ADS)

    Gorbunov, Michael E.; Kirchengast, Gottfried

    2018-01-01

    A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. The systematic and random uncertainties are propagated from excess phase to bending angle profiles, using a perturbation approach and the wave-optical method recently introduced by Gorbunov and Kirchengast (2015), starting with estimated excess phase uncertainties. The results are encouraging and this uncertainty propagation approach combined with BLB correction enables a robust reduction and quantification of the uncertainties of excess phases and bending angles in the lower troposphere.

  8. Modeling dynamics of western juniper under climate change in a semiarid ecosystem

    NASA Astrophysics Data System (ADS)

    Shrestha, R.; Glenn, N. F.; Flores, A. N.

    2013-12-01

    Modeling future vegetation dynamics in response to climate change and disturbances such as fire relies heavily on model parameterization. Fine-scale field-based measurements can provide the necessary parameters for constraining models at a larger scale. But the time- and labor-intensive nature of field-based data collection leads to sparse sampling and significant spatial uncertainties in retrieved parameters. In this study we quantify the fine-scale carbon dynamics and uncertainty of juniper woodland in the Reynolds Creek Experimental Watershed (RCEW) in southern Idaho, which is a proposed critical zone observatory (CZO) site for soil carbon processes. We leverage field-measured vegetation data along with airborne lidar and timeseries Landsat imagery to initialize a state-and-transition model (VDDT) and a process-based fire-model (FlamMap) to examine the vegetation dynamics in response to stochastic fire events and climate change. We utilize recently developed and novel techniques to measure biomass and canopy characteristics of western juniper at the individual tree scale using terrestrial and airborne laser scanning techniques in RCEW. These fine-scale data are upscaled across the watershed for the VDDT and FlamMap models. The results will immediately improve our understanding of fine-scale dynamics and carbon stocks and fluxes of woody vegetation in a semi-arid ecosystem. Moreover, quantification of uncertainty will also provide a basis for generating ensembles of spatially-explicit alternative scenarios to guide future land management decisions in the region.

  9. Teaching Monte Carlo Strategies for Earth System Modelling using a Guided Group-Learning Approach in the Classroom

    NASA Astrophysics Data System (ADS)

    Wagener, T.; Pianosi, F.; Woods, R. A.

    2016-12-01

    The need for quantifying uncertainty in earth system modelling has now been well established on both scientific and policy-making grounds. There is an urgent need to bring the skills and tools needed for doing so into practice. However, such topics are currently largely constrained to specialist graduate courses or to short courses for PhD students. Teaching the advanced skills needed for implementing and for using uncertainty analysis is difficult because students feel that it is inaccessible and it can be boring if presented using frontal teaching in the classroom. While we have made significant advancement in sharing teaching material, sometimes even including teaching notes (Wagener et al., 2012, Hydrology and Earth System Sciences), there is great need for understanding how we can bring such advanced topics into the undergraduate (and even graduate) curriculum in an effective manner. We present the results of our efforts to teach Matlab-based tools for uncertainty quantification in earth system modelling in a civil engineering undergraduate course. We use the example of teaching Monte Carlo strategies, the basis for the most widely used uncertainty quantification approaches, through the use of guided group-learning activities in the classroom. We utilize a three-step approach: [1] basic introduction to the problem, [2] guided group-learning to develop a possible solution, [3] comparison of possible solutions with state-of-the-art algorithms across groups. Our initial testing in an undergraduate course suggests that (i) overall students find a group-learning approach more engaging, (ii) that different students take charge of advancing the discussion at different stages or for different problems, and (iii) that making appropriate suggestions (facilitator) to guide the discussion keeps the speed of advancement sufficiently high. We present the approach, our initial results and suggest how a wider course on earth system modelling could be formulated in this manner.

  10. IMPACT: Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking

    NASA Astrophysics Data System (ADS)

    Koller, J.; Brennan, S.; Godinez, H. C.; Higdon, D. M.; Klimenko, A.; Larsen, B.; Lawrence, E.; Linares, R.; McLaughlin, C. A.; Mehta, P. M.; Palmer, D.; Ridley, A. J.; Shoemaker, M.; Sutton, E.; Thompson, D.; Walker, A.; Wohlberg, B.

    2013-12-01

    Low-Earth orbiting satellites suffer from atmospheric drag due to thermospheric density, which can change by several orders of magnitude, especially during space weather events. Solar flares, precipitating particles and ionospheric currents cause the upper atmosphere to heat up, redistribute, and cool again. These processes are intrinsically included in empirical models, e.g. MSIS and Jacchia-Bowman type models. However, sensitivity analysis has shown that atmospheric drag has the highest influence on satellite conjunction analysis, and empirical models still do not provide the desired accuracy. Space debris and collision avoidance have become an increasingly operational reality. It is paramount to accurately predict satellite orbits and include drag effects driven by space weather. The IMPACT project (Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking), funded with over $5 million by the Los Alamos Laboratory Directed Research and Development office, has the goal of developing an integrated system of atmospheric drag modeling, orbit propagation, and conjunction analysis with detailed uncertainty quantification to address the space debris and collision avoidance problem. Now over two years into the project, we have developed an integrated solution combining physics-based density modeling of the upper atmosphere between 120-700 km altitude, satellite drag forecasting for quiet and disturbed geomagnetic conditions, and conjunction analysis with non-Gaussian uncertainty quantification. We are employing several novel approaches including a unique observational sensor developed at Los Alamos; machine learning with a support-vector machine approach to the coupling between solar drivers of the upper atmosphere and satellite drag; rigorous data assimilative modeling using a physics-based approach instead of empirical modeling of the thermosphere; and a computed-tomography method for extracting temporal maps of thermospheric densities using ground based observations. The developed IMPACT framework is an open research framework enabling the exchange and testing of a variety of atmospheric density models, orbital propagators, drag coefficient models, ground based observations, etc., and the study of their effects on conjunctions and uncertainty predictions. The framework is based on a modern service-oriented architecture controlled by a web interface and providing 3D visualizations. The goal of this project is to revolutionize the ability to monitor and track space objects during highly disturbed space weather conditions, provide suitable forecasts for satellite drag conditions and conjunction analysis, and enable the exchange of models, codes, and data in an open research environment. We will present capabilities and results of the IMPACT framework including a demo of the control interface and visualizations.

  11. Final Technical Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knio, Omar M.

    QUEST is a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, University of Southern California, Massachusetts Institute of Technology, University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The Duke effort focused on the development of algorithms and utility software for non-intrusive sparse UQ representations, and on participation in the organization of annual workshops and tutorials to disseminate UQ tools to the community, and to gather input in order to adapt approaches to the needs of SciDAC customers. In particular, fundamental developments were made in (a) multiscale stochastic preconditioners, (b) gradient-based approaches to inverse problems, (c) adaptive pseudo-spectral approximations, (d) stochastic limit cycles, and (e) sensitivity analysis tools for noisy systems. In addition, large-scale demonstrations were performed, namely in the context of ocean general circulation models.

  12. Matthew Reynolds | NREL

    Science.gov Websites

    Matthew's research at NREL is focused on applying uncertainty quantification techniques. Research interests: uncertainty quantification, computational multilinear algebra, and approximation theory.

  13. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

    Decision makers need to know quantified simulation credibility to make simulation-based critical decisions, and other users of simulations need it to use simulations effectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility of establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.

  14. Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images

    NASA Astrophysics Data System (ADS)

    Ely, G.; Malcolm, A. E.; Poliannikov, O. V.

    2017-12-01

    Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimations. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create qualitative descriptions of seismic image uncertainty and to put error bounds on quantities of interest such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.

  15. Hydrologic drought prediction under climate change: Uncertainty modeling with Dempster-Shafer and Bayesian approaches

    NASA Astrophysics Data System (ADS)

    Raje, Deepashree; Mujumdar, P. P.

    2010-09-01

    Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory or stochastic uncertainty, and epistemic or subjective uncertainty. This paper shows how the D-S theory can be used to represent beliefs in some hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and decreasing probability of normal and wet conditions in Orissa as a result of climate change.
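    A minimal sketch of the evidence-combination step described above: Dempster's rule applied to two basic probability assignments over a small frame of drought classes, with belief and plausibility extracted from the combined assignment. The classes and mass values are hypothetical, not those derived from the SSFI-4 classifications in the study.

```python
# Hedged sketch (generic Dempster's rule, not the paper's implementation).
from itertools import product

FRAME = frozenset({"dry", "normal", "wet"})

def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

def belief(m, hypothesis):
    return sum(v for k, v in m.items() if k <= hypothesis)

def plausibility(m, hypothesis):
    return sum(v for k, v in m.items() if k & hypothesis)

# Hypothetical bpa's from two GCM/scenario sources (not values from the study).
m_source1 = {frozenset({"dry"}): 0.5, frozenset({"dry", "normal"}): 0.3, FRAME: 0.2}
m_source2 = {frozenset({"dry"}): 0.4, frozenset({"normal", "wet"}): 0.4, FRAME: 0.2}
m = dempster_combine(m_source1, m_source2)
h = frozenset({"dry"})
print("belief(dry) =", round(belief(m, h), 3),
      " plausibility(dry) =", round(plausibility(m, h), 3))
```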

  16. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  17. The Challenges of Credible Thermal Protection System Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2013-01-01

    The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.

  18. Atmospheric CO2 inversions on the mesoscale using data-driven prior uncertainties: quantification of the European terrestrial CO2 fluxes

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas F.; Heimann, Martin

    2018-03-01

    Optimized biogenic carbon fluxes for Europe were estimated from high-resolution regional-scale inversions, utilizing atmospheric CO2 measurements at 16 stations for the year 2007. Additional sensitivity tests with different data-driven error structures were performed. As the atmospheric network is rather sparse and consequently contains large spatial gaps, we use a priori biospheric fluxes to further constrain the inversions. The biospheric fluxes were simulated by the Vegetation Photosynthesis and Respiration Model (VPRM) at a resolution of 0.1° and optimized against eddy covariance data. Overall we estimate an a priori uncertainty of 0.54 GtC yr-1 related to the poor spatial representation between the biospheric model and the ecosystem sites. The sink estimated from the atmospheric inversions for the area of Europe (as represented in the model domain) ranges between 0.23 and 0.38 GtC yr-1 (0.39 and 0.71 GtC yr-1 up-scaled to geographical Europe). This is within the range of posterior flux uncertainty estimates of previous studies using ground-based observations.

  19. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
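    For contrast with the nested Kronrod-Patterson-Hermite rule advocated above, the sketch below shows plain (unnested) Gauss-Hermite stochastic collocation on a small tensor grid, which is the baseline the paper improves upon. The model function and grid level are placeholders, not the subsurface flow solver.

```python
# Hedged sketch: unnested Gauss-Hermite stochastic collocation for the mean and
# variance of a model output with two independent standard-normal inputs.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from itertools import product

def model(xi1, xi2):
    # Hypothetical stand-in for a subsurface flow/transport solver output.
    return np.exp(0.4 * xi1) * (1.0 + 0.2 * xi2 ** 2)

nodes, weights = hermegauss(7)               # probabilists' Hermite, weight e^{-x^2/2}
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to the standard normal density

mean = var = 0.0
for (x1, w1), (x2, w2) in product(zip(nodes, weights), repeat=2):
    w = w1 * w2
    y = model(x1, x2)
    mean += w * y
    var += w * y ** 2
var -= mean ** 2
print("collocation mean/std:", round(mean, 4), round(float(np.sqrt(var)), 4))
```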

  20. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    PubMed

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

    We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
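    A minimal sketch of the Monte Carlo wall-shear-stress uncertainty idea described above: perturb near-wall PIV velocities by their estimated uncertainties and propagate each realization through a simple velocity-gradient fit. All values (viscosity, positions, velocities, uncertainties) are assumed for illustration and are not the study's data.

```python
# Hedged sketch (not the authors' code): Monte Carlo propagation of PIV velocity
# uncertainty into wall shear stress estimated from a near-wall velocity gradient.
import numpy as np

rng = np.random.default_rng(4)

mu = 3.5e-3                                  # dynamic viscosity, Pa*s (assumed)
y = np.array([1.0e-4, 2.0e-4, 3.0e-4])       # wall-normal positions of PIV points, m (assumed)
u = np.array([0.020, 0.041, 0.059])          # measured velocities, m/s (assumed)
u_unc = np.array([0.002, 0.002, 0.003])      # per-point velocity uncertainties, m/s (assumed)

def wall_shear_stress(u_sample):
    # Least-squares velocity gradient through the near-wall points (no-slip at y = 0).
    dudy = np.sum(y * u_sample) / np.sum(y * y)
    return mu * dudy

tau = np.array([wall_shear_stress(u + u_unc * rng.standard_normal(u.size))
                for _ in range(20000)])
print("wall shear stress: %.3f +/- %.3f Pa" % (tau.mean(), tau.std()))
```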

  1. Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database

    NASA Technical Reports Server (NTRS)

    Hanke, Jeremy L.

    2011-01-01

    The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.

  2. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and uncertainties associated with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bio-remediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bio-remediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which would otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to bioremediation of groundwater at a real site shows that it is effective in providing support for management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
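
    The core idea of the null-space Monte Carlo method used above can be sketched in a few lines (an illustration under strong simplifying assumptions, not the study's PEST-based workflow): random parameter perturbations are projected onto the null space of the calibration Jacobian so the calibrated fit is approximately preserved while the unconstrained parameter directions are sampled.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(p):
    # hypothetical under-determined model: 3 observations, 4 parameters
    return np.array([p[0] + p[1], p[1] * p[2], p[3] - p[0]])

def jacobian(f, p, eps=1e-6):
    """Finite-difference Jacobian of simulated observations w.r.t. parameters."""
    y0 = f(p)
    J = np.empty((y0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (f(p + dp) - y0) / eps
    return J

p_cal = np.array([1.0, 0.5, 2.0, 0.3])         # a calibrated parameter set (assumed)
J = jacobian(forward_model, p_cal)
_, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-8 * s[0]))
V_null = Vt[rank:].T                           # basis of the calibration null space

samples = []
for _ in range(1000):
    dp = rng.normal(size=p_cal.size)           # random perturbation of the parameters
    dp_null = V_null @ (V_null.T @ dp)         # keep only its null-space component
    samples.append(p_cal + dp_null)            # calibration-constrained parameter set
print(np.std(samples, axis=0))                 # spread of the constrained ensemble
```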

  3. Use of NARCCAP data to characterize regional climate uncertainty in the impact of global climate change on large river fish population: Missouri River sturgeon example

    NASA Astrophysics Data System (ADS)

    Anderson, C. J.; Wildhaber, M. L.; Wikle, C. K.; Moran, E. H.; Franz, K. J.; Dey, R.

    2012-12-01

    Climate change operates over a broad range of spatial and temporal scales. Understanding the effects of change on ecosystems requires accounting for the propagation of information and uncertainty across these scales. For example, to understand potential climate change effects on fish populations in riverine ecosystems, climate conditions predicted by coarse-resolution atmosphere-ocean global climate models must first be translated to the regional climate scale. In turn, this regional information is used to force watershed models, which are used to force river condition models, which impact the population response. A critical challenge in such a multiscale modeling environment is to quantify sources of uncertainty given the highly nonlinear nature of interactions between climate variables and the individual organism. We use a hierarchical modeling approach for accommodating uncertainty in multiscale ecological impact studies. This framework allows for uncertainty due to system models, model parameter settings, and stochastic parameterizations. This approach is a hybrid between physical (deterministic) downscaling and statistical downscaling, recognizing that there is uncertainty in both. We use NARCCAP data to determine confidence in the capability of climate models to simulate relevant processes and to quantify regional climate variability within the context of the hierarchical model of uncertainty quantification. By confidence, we mean the ability of the regional climate model to replicate observed mechanisms. We use the NCEP-driven simulations for this analysis. This provides a base from which regional change can be categorized as either a modification of previously observed mechanisms or the emergence of new processes. The management implications for these categories of change are significantly different in that procedures to address impacts from existing processes may already be known and need only adjustment, whereas an emergent process may require new management strategies. The results from the hierarchical analysis of uncertainty are used to study the relative change in weights of the endangered Missouri River pallid sturgeon (Scaphirhynchus albus) under a 21st century climate scenario.

  4. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
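
    As a toy illustration of the truncation-uncertainty idea discussed above (the numbers and expansion parameter below are assumed, not results from the thesis), dimensionless coefficients are extracted from order-by-order predictions and their typical size sets the scale of the first omitted term:

```python
import numpy as np

Q = 0.3                                          # assumed EFT expansion parameter
y_orders = np.array([10.0, 11.8, 11.5, 11.56])   # hypothetical predictions at orders 0..3
y_ref = y_orders[0]                              # reference scale of the observable

# coefficients from successive corrections: y_n - y_{n-1} = c_n * y_ref * Q**n
dy = np.diff(y_orders)
n = np.arange(1, len(y_orders))
c = dy / (y_ref * Q**n)

cbar = np.sqrt(np.mean(c**2))                    # rough "natural size" of the coefficients
k = len(y_orders) - 1
trunc_err = cbar * y_ref * Q**(k + 1)            # scale of the first omitted term
print(f"prediction = {y_orders[-1]:.3f} +/- {trunc_err:.3f} (truncation estimate)")
```

    In the Bayesian treatment described in the abstract, this point estimate is replaced by a posterior for the truncation error built from prior pdfs on the coefficient sizes.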

  5. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum of the objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.

  6. Temporal models for the episodic volcanism of Campi Flegrei caldera (Italy) with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bevilacqua, Andrea; Flandoli, Franco; Neri, Augusto; Isaia, Roberto; Vitale, Stefano

    2016-11-01

    After the large-scale event of the Neapolitan Yellow Tuff (~15 ka B.P.), intense and mostly explosive volcanism has occurred within and along the boundaries of the Campi Flegrei caldera (Italy). Eruptions occurred closely spaced in time, over periods from a few centuries to a few millennia, and were alternated with periods of quiescence lasting up to several millennia. Often events also occurred closely in space, thus generating a cluster of events. This study had two main objectives: (1) to describe the uncertainty in the geologic record by using a quantitative model and (2) to develop, based on the uncertainty assessment, a long-term subdomain-specific temporal probability model that describes the temporal and spatial eruptive behavior of the caldera. In particular, the study adopts a space-time doubly stochastic nonhomogeneous Poisson-type model with a local self-excitation feature able to generate clustering of events consistent with the reconstructed record of Campi Flegrei. Results allow the evaluation of similarities and differences between the three epochs of activity as well as the derivation of the eruptive base rate of the caldera and its capacity to generate clusters of events. The temporal probability model is also used to investigate the effect of the most recent eruption of Monte Nuovo (A.D. 1538) on a possible reactivation of the caldera and to estimate the time to the next eruption under different volcanological and modeling assumptions.
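
    A minimal sketch of the self-excitation mechanism invoked above is a Hawkes-type temporal point process simulated by thinning; the rates and the simulation window below are arbitrary assumptions, not the fitted Campi Flegrei parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 0.5       # background event rate (assumed, events per kyr)
alpha = 0.8    # jump in intensity after each event (assumed)
beta = 2.0     # decay rate of the excitation (assumed, 1/kyr)
T = 15.0       # length of the simulated window (kyr)

def intensity(t, events):
    past = np.asarray([s for s in events if s < t])
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

events, t = [], 0.0
while t < T:
    lam_bar = intensity(t, events) + alpha       # upper bound on the decaying intensity
    t += rng.exponential(1.0 / lam_bar)          # candidate inter-event time
    if t < T and rng.uniform() <= intensity(t, events) / lam_bar:
        events.append(t)                         # accepted event; clusters form after accepts
print(len(events), np.round(events, 2))
```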

  7. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.
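
    The Gaussian anamorphosis step mentioned above can be sketched as an empirical quantile mapping (illustrative only; the sample and the mapping details are assumptions, not the paper's implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta = rng.lognormal(mean=0.0, sigma=0.7, size=5000)   # a skewed posterior sample

def to_gaussian(x, sample):
    """Map values to standard normal scores via the sample's empirical CDF."""
    ranks = np.searchsorted(np.sort(sample), x, side="right") / (len(sample) + 1)
    return stats.norm.ppf(ranks)

def from_gaussian(z, sample):
    """Inverse map: normal scores back to quantiles of the original sample."""
    return np.quantile(sample, stats.norm.cdf(z))

z = to_gaussian(theta, theta)              # approximately N(0,1) scores
theta_back = from_gaussian(z, theta)       # round trip back to the original scale
print(np.mean(z), np.std(z), np.max(np.abs(theta_back - theta)))
```

    Gaussian-based machinery (here, the polynomial chaos surrogate) then operates on the transformed variables, and results are mapped back through the inverse transform.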

  8. A Probabilistic Framework for Peptide and Protein Quantification from Data-Dependent and Data-Independent LC-MS Proteomics Experiments

    PubMed Central

    Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.

    2013-01-01

    A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168

  9. UQTk Version 3.0.3 User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sargsyan, Khachik; Safta, Cosmin; Chowdhary, Kamaljit Singh

    2017-05-01

    The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

  10. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling of a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, so that the MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computational effort. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
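
    A stripped-down sketch of the surrogate-in-the-loop MCMC idea follows (a toy one-parameter problem; the "expensive" model, surrogate form, and data are invented, and the study itself uses BMARS rather than a polynomial fit):

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_model(k):                  # stand-in for a slow groundwater simulation
    return 10.0 + 5.0 * np.log(k)

# build a cheap surrogate from a handful of training runs
k_train = np.linspace(0.5, 5.0, 8)
coeffs = np.polyfit(k_train, expensive_model(k_train), 3)
surrogate = lambda k: np.polyval(coeffs, k)

h_obs, sigma = 14.0, 0.2                 # hypothetical observed head and its error

def log_post(k):
    if not 0.5 <= k <= 5.0:              # uniform prior bounds
        return -np.inf
    return -0.5 * ((surrogate(k) - h_obs) / sigma) ** 2

# random-walk Metropolis using only surrogate evaluations inside the likelihood
k_cur, lp_cur, chain = 2.0, log_post(2.0), []
for _ in range(20000):
    k_prop = k_cur + rng.normal(0.0, 0.2)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        k_cur, lp_cur = k_prop, lp_prop
    chain.append(k_cur)
print(np.mean(chain[5000:]), np.std(chain[5000:]))   # posterior mean and spread
```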

  11. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
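
    As a compact illustration of one of the updating techniques mentioned above, the sketch below performs a single ensemble Kalman filter analysis step on synthetic numbers (the ensemble, observation operator, and error variance are all assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
n_ens, n_state = 50, 3

X = rng.normal(1.0, 0.5, size=(n_state, n_ens))   # forecast ensemble (columns = members)
H = np.array([[1.0, 0.0, 0.0]])                   # observe only the first state variable
y_obs, r = 1.4, 0.1**2                            # observation and its error variance

# covariance estimated from the ensemble itself
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)

# Kalman gain and perturbed-observation analysis update
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r)
Y = y_obs + rng.normal(0.0, np.sqrt(r), size=(1, n_ens))
X_analysis = X + K @ (Y - H @ X)
print(X.mean(axis=1), "->", X_analysis.mean(axis=1))
```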

  12. Gap Size Uncertainty Quantification in Advanced Gas Reactor TRISO Fuel Irradiation Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, Binh T.; Einerson, Jeffrey J.; Hawkes, Grant L.

    The Advanced Gas Reactor (AGR)-3/4 experiment is the combination of the third and fourth tests conducted within the tristructural isotropic fuel development and qualification research program. The AGR-3/4 test consists of twelve independent capsules containing a fuel stack in the center surrounded by three graphite cylinders and shrouded by a stainless steel shell. This capsule design enables temperature control of both the fuel and the graphite rings by varying the neon/helium gas mixture flowing through the four resulting gaps. Knowledge of fuel and graphite temperatures is crucial for establishing the functional relationship between fission product release and irradiation thermal conditions. These temperatures are predicted for each capsule using the commercial finite-element heat transfer code ABAQUS. Uncertainty quantification reveals that the gap size uncertainties are among the dominant factors contributing to predicted temperature uncertainty due to high input sensitivity and uncertainty. Gap size uncertainty originates from the fact that all gap sizes vary with time due to dimensional changes of the fuel compacts and three graphite rings caused by extended exposure to high temperatures and fast neutron irradiation. Gap sizes are estimated using as-fabricated dimensional measurements at the start of irradiation and post irradiation examination dimensional measurements at the end of irradiation. Uncertainties in these measurements provide a basis for quantifying gap size uncertainty. However, lack of gap size measurements during irradiation and lack of knowledge about the dimension change rates lead to gap size modeling assumptions, which could increase gap size uncertainty. In addition, the dimensional measurements are performed at room temperature, and must be corrected to account for thermal expansion of the materials at high irradiation temperatures. Uncertainty in the thermal expansion coefficients for the graphite materials used in the AGR-3/4 capsules also increases gap size uncertainty. This study focuses on analysis of modeling assumptions and uncertainty sources to evaluate their impacts on the gap size uncertainty.

  13. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower reliable interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.

  14. Probabilistic models and uncertainty quantification for the ionization reaction rate of atomic Nitrogen

    NASA Astrophysics Data System (ADS)

    Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.

    2012-05-01

    The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e- → N+ + 2e-). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at the EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 10^11 cm^3 mol^-1 s^-1 at 10,000 K.

  15. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

    The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification, and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that by utilizing design of experiments methodology in conjunction with current SPC practices, one can more efficiently and robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients, and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research is applicable to any wind tunnel check standard testing program.

  16. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
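
    A one-variable sketch of the Bernstein-expansion bounding idea follows (illustrative only; the polynomial is made up, and the paper's framework handles multivariate responses with mixed aleatory/epistemic inputs): the Bernstein coefficients of a polynomial on [0, 1] enclose its range, and degree elevation tightens the enclosure.

```python
import numpy as np
from math import comb

def bernstein_coeffs(a, n):
    """Bernstein coefficients on [0,1] of p(x) = sum_i a[i]*x**i, for degree n >= len(a)-1."""
    b = np.zeros(n + 1)
    for k in range(n + 1):
        b[k] = sum(comb(k, i) / comb(n, i) * a[i]
                   for i in range(min(k, len(a) - 1) + 1))
    return b

a = [1.0, -3.0, 2.5]                 # p(x) = 1 - 3x + 2.5x^2; true range on [0,1] is [0.1, 1]
for n in (2, 6, 12):                 # raising the Bernstein degree tightens the bounds
    b = bernstein_coeffs(a, n)
    print(n, b.min(), b.max())       # [min(b), max(b)] is guaranteed to contain the range
```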

  17. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC method is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
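
    The multilevel idea referred to above can be illustrated with a two-level estimator on a toy random integrand (the solver, resolutions, and sample sizes below are assumptions, not the abstract's transport problem): many cheap coarse samples fix the bulk of the estimate, and a few expensive samples of the fine-coarse difference correct its bias.

```python
import numpy as np

rng = np.random.default_rng(6)

def solver(xi, n_cells):
    """Stand-in for a flow/transport solver whose accuracy and cost grow with n_cells."""
    x = np.linspace(0.0, 1.0, n_cells)
    return np.exp(-xi * x).mean()            # crude quadrature of a random integrand

N_coarse, N_fine = 20000, 200
xi_c = rng.lognormal(0.0, 0.5, N_coarse)
xi_f = rng.lognormal(0.0, 0.5, N_fine)

level0 = np.array([solver(x, 8) for x in xi_c])                  # cheap, biased level
corr = np.array([solver(x, 128) - solver(x, 8) for x in xi_f])   # costly, low-variance correction

mlmc_mean = level0.mean() + corr.mean()
print(mlmc_mean,
      level0.std() / np.sqrt(N_coarse),      # statistical error of the coarse level
      corr.std() / np.sqrt(N_fine))          # statistical error of the correction
```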

  18. Uncertainty quantification for optical model parameters

    DOE PAGES

    Lovell, A. E.; Nunes, F. M.; Sarich, J.; ...

    2017-02-21

    Although uncertainty quantification has been making its way into nuclear theory, these methods have yet to be explored in the context of reaction theory. For example, it is well known that different parameterizations of the optical potential can result in different cross sections, but these differences have not been systematically studied and quantified. The purpose of our work is to investigate the uncertainties in nuclear reactions that result from fitting a given model to elastic-scattering data, as well as to study how these uncertainties propagate to the inelastic and transfer channels. We use statistical methods to determine a best fit and create corresponding 95% confidence bands. A simple model of the process is fit to elastic-scattering data and used to predict either inelastic or transfer cross sections. In this initial work, we assume that our model is correct, and the only uncertainties come from the variation of the fit parameters. Here, we study a number of reactions involving neutron and deuteron projectiles with energies in the range of 5–25 MeV/u, on targets with mass A=12–208. We investigate the correlations between the parameters in the fit. The case of deuterons on 12C is discussed in detail: the elastic-scattering fit and the prediction of 12C(d,p)13C transfer angular distributions, using both uncorrelated and correlated χ2 minimization functions. The general features for all cases are compiled in a systematic manner to identify trends. This work shows that, in many cases, the correlated χ2 functions (in comparison to the uncorrelated χ2 functions) provide a more natural parameterization of the process. These correlated functions do, however, produce broader confidence bands. Further optimization may require improvement in the models themselves and/or more information included in the fit.

  19. Inadequacy representation of flamelet-based RANS model for turbulent non-premixed flame

    NASA Astrophysics Data System (ADS)

    Lee, Myoungkyu; Oliver, Todd; Moser, Robert

    2017-11-01

    Stochastic representations for model inadequacy in RANS-based models of non-premixed jet flames are developed and explored. Flamelet-based RANS models are attractive for engineering applications relative to higher-fidelity methods because of their low computational costs. However, the various assumptions inherent in such models introduce errors that can significantly affect the accuracy of computed quantities of interest. In this work, we develop an approach to represent the model inadequacy of the flamelet-based RANS model. In particular, we pose a physics-based, stochastic PDE for the triple correlation of the mixture fraction. This additional uncertain state variable is then used to construct perturbations of the PDF for the instantaneous mixture fraction, which is used to obtain an uncertain perturbation of the flame temperature. A hydrogen-air non-premixed jet flame is used to demonstrate the representation of the inadequacy of the flamelet-based RANS model. This work was supported by the DARPA EQUiPS (Enabling Quantification of Uncertainty in Physical Systems) program.

  20. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
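
    A one-dimensional sketch of the log-Gaussian random field construction used above is shown below (exponential covariance, eigendecomposition-based Karhunen-Loève truncation; the correlation length, variance, and mode count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
n, corr_len, sigma2 = 200, 0.2, 1.0
x = np.linspace(0.0, 1.0, n)
C = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)   # exponential covariance

# discrete Karhunen-Loève modes: eigenpairs of the covariance matrix, largest first
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

m = 30                                            # number of retained KL modes
xi = rng.standard_normal(m)                       # independent standard normal coefficients
g = eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)    # Gaussian field realization
K = np.exp(g)                                     # log-Gaussian hydraulic conductivity
print(K.min(), K.max())
```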

  1. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  2. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  3. The role of correlations in uncertainty quantification of transportation relevant fuel models

    DOE PAGES

    Fridlyand, Aleksandr; Johnson, Matthew S.; Goldsborough, S. Scott; ...

    2017-02-03

    Large reaction mechanisms are often used to describe the combustion behavior of transportation-relevant fuels like gasoline, where these are typically represented by surrogate blends, e.g., n-heptane/iso-octane/toluene. We describe efforts to quantify the uncertainty in the predictions of such mechanisms at realistic engine conditions, seeking to better understand the robustness of the model as well as the important reaction pathways and their impacts on combustion behavior. In this work, we examine the importance of taking into account correlations among reactions that utilize the same rate rules and those with multiple product channels on forward propagation of uncertainty by Monte Carlo simulations. Automated means are developed to generate the uncertainty factor assignment for a detailed chemical kinetic mechanism, by first uniquely identifying each reacting species, then sorting each of the reactions based on the rate rule utilized. Simulation results reveal that in the low temperature combustion regime for iso-octane, the majority of the uncertainty in the model predictions can be attributed to low temperature reactions of the fuel sub-mechanism. The foundational, or small-molecule, chemistry (C0-C4) only contributes significantly to uncertainties in the predictions at the highest temperatures (Tc=900 K). Accounting for correlations between important reactions is shown to produce non-negligible differences in the estimates of uncertainty. Including correlations among reactions that use the same rate rules increases uncertainty in the model predictions, while accounting for correlations among reactions with multiple branches decreases uncertainty in some cases. Significant non-linear response is observed in the model predictions depending on how the probability distributions of the uncertain rate constants are defined. Finally, we conclude that care must be exercised in defining these probability distributions in order to reduce bias and physically unrealistic estimates in the forward propagation of uncertainty for a range of UQ activities.
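
    The correlation treatment described above amounts to drawing one multiplier per rate rule and applying it to every reaction generated from that rule; the sketch below uses invented reactions and uncertainty factors purely to illustrate the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(8)

# reaction -> rate rule that generated it (hypothetical assignment)
rate_rule_of = {"R1": "ROO_isomerization", "R2": "ROO_isomerization",
                "R3": "H_abstraction", "R4": "H_abstraction", "R5": "beta_scission"}
uncertainty_factor = {"ROO_isomerization": 3.0, "H_abstraction": 2.0, "beta_scission": 2.5}

def sample_multipliers():
    """One Monte Carlo draw of rate-constant multipliers, correlated by shared rate rule."""
    rule_draw = {rule: f ** rng.uniform(-1.0, 1.0)      # log-uniform within factor f
                 for rule, f in uncertainty_factor.items()}
    return {rxn: rule_draw[rule] for rxn, rule in rate_rule_of.items()}

for _ in range(3):
    print(sample_multipliers())   # R1/R2 always share a multiplier, as do R3/R4
```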

  4. V&V framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hills, Richard G.; Maniaci, David Charles; Naughton, Jonathan W.

    2015-09-01

    A Verification and Validation (V&V) framework is presented for the development and execution of coordinated modeling and experimental programs to assess the predictive capability of computational models of complex systems through focused, well-structured, and formal processes. The elements of the framework are based on established V&V methodology developed by various organizations including the Department of Energy, the National Aeronautics and Space Administration, the American Institute of Aeronautics and Astronautics, and the American Society of Mechanical Engineers. Four main topics are addressed: 1) program planning based on expert elicitation of the modeling physics requirements, 2) experimental design for model assessment, 3) uncertainty quantification for experimental observations and computational model simulations, and 4) assessment of the model predictive capability. The audience for this document includes program planners, modelers, experimentalists, V&V specialists, and customers of the modeling results.

  5. Stochastic Coastal/Regional Uncertainty Modelling: a Copernicus marine research project in the framework of Service Evolution

    NASA Astrophysics Data System (ADS)

    Vervatis, Vassilios; De Mey, Pierre; Ayoub, Nadia; Kailas, Marios; Sofianos, Sarantis

    2017-04-01

    The project entitled Stochastic Coastal/Regional Uncertainty Modelling (SCRUM) aims at strengthening CMEMS in the areas of ocean uncertainty quantification, ensemble consistency verification and ensemble data assimilation. The project has been initiated by the University of Athens and LEGOS/CNRS research teams, in the framework of CMEMS Service Evolution. The work is based on stochastic modelling of ocean physics and biogeochemistry in the Bay of Biscay, on an identical sub-grid configuration of the IBI-MFC system in its latest CMEMS operational version V2. In a first step, we use a perturbed tendencies scheme to generate ensembles describing uncertainties in open ocean and on the shelf, focusing on upper ocean processes. In a second step, we introduce two methodologies (i.e. rank histograms and array modes) aimed at checking the consistency of the above ensembles with respect to TAC data and arrays. Preliminary results highlight that wind uncertainties dominate all other atmosphere-ocean sources of model errors. The ensemble spread in medium-range ensembles is approximately 0.01 m for SSH and 0.15 °C for SST, though these values vary depending on season and cross shelf regions. Ecosystem model uncertainties emerging from perturbations in physics appear to be moderately larger than those perturbing the concentration of the biogeochemical compartments, resulting in total chlorophyll spread at about 0.01 mg.m-3. First consistency results show that the model ensemble and the pseudo-ensemble of OSTIA (L4) observation SSTs appear to exhibit nonzero joint probabilities with each other since error vicinities overlap. Rank histograms show that the model ensemble is initially under-dispersive, though results improve in the context of seasonal-range ensembles.
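
    The rank-histogram consistency check mentioned above can be sketched with synthetic numbers (the spreads are invented to mimic an under-dispersive ensemble): for each case the observation's rank within the sorted ensemble is recorded, and a histogram peaked at the extremes signals under-dispersion.

```python
import numpy as np

rng = np.random.default_rng(9)
n_cases, n_members = 500, 20

truth = rng.normal(0.0, 1.0, n_cases)                         # verifying observations
ensemble = rng.normal(0.0, 0.6, size=(n_cases, n_members))    # deliberately too little spread

ranks = np.sum(ensemble < truth[:, None], axis=1)             # rank of each observation
hist = np.bincount(ranks, minlength=n_members + 1)
print(hist)    # counts piled at ranks 0 and n_members indicate under-dispersion
```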

  6. Uncertainty Propagation in OMFIT

    NASA Astrophysics Data System (ADS)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
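
    The style of propagation described above can be sketched with the python uncertainties package (the parameter names, values, and covariance below are hypothetical, not OMFIT output): correlated fit parameters carry their covariance automatically through derived quantities.

```python
from uncertainties import correlated_values

# hypothetical tanh-fit parameters (core value, pedestal height, pedestal width)
popt = [2.1, 0.95, 0.04]
pcov = [[0.010, 0.002, 0.000],
        [0.002, 0.004, 0.001],
        [0.000, 0.001, 0.0004]]
core, ped, width = correlated_values(popt, pcov)   # correlated uncertain numbers

grad_scale = ped / width          # a derived quantity; covariances propagate automatically
print(grad_scale)                 # nominal value +/- propagated standard deviation
print(grad_scale.nominal_value, grad_scale.std_dev)
```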

  7. Uncertainty Quantification for Ice Sheet Science and Sea Level Projections

    NASA Astrophysics Data System (ADS)

    Boening, C.; Schlegel, N.; Limonadi, D.; Schodlok, M.; Seroussi, H. L.; Larour, E. Y.; Watkins, M. M.

    2017-12-01

    In order to better quantify uncertainties in global mean sea level rise projections and in particular upper bounds, we aim at systematically evaluating the contributions from ice sheets and potential for extreme sea level rise due to sudden ice mass loss. Here, we take advantage of established uncertainty quantification tools embedded within the Ice Sheet System Model (ISSM) as well as sensitivities to ice/ocean interactions using melt rates and melt potential derived from MITgcm/ECCO2. With the use of these tools, we conduct Monte-Carlo style sampling experiments on forward simulations of the Antarctic ice sheet, by varying internal parameters and boundary conditions of the system over both extreme and credible worst-case ranges. Uncertainty bounds for climate forcing are informed by CMIP5 ensemble precipitation and ice melt estimates for year 2100, and uncertainty bounds for ocean melt rates are derived from a suite of regional sensitivity experiments using MITgcm. Resulting statistics allow us to assess how regional uncertainty in various parameters affect model estimates of century-scale sea level rise projections. The results inform efforts to a) isolate the processes and inputs that are most responsible for determining ice sheet contribution to sea level; b) redefine uncertainty brackets for century-scale projections; and c) provide a prioritized list of measurements, along with quantitative information on spatial and temporal resolution, required for reducing uncertainty in future sea level rise projections. Results indicate that ice sheet mass loss is dependent on the spatial resolution of key boundary conditions - such as bedrock topography and melt rates at the ice-ocean interface. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  8. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a relatively narrower reliable interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable for uncertainty analysis and risk management.

  9. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  10. Goal-Oriented Probability Density Function Methods for Uncertainty Quantification

    DTIC Science & Technology

    2015-12-11

    approximations or data-driven approaches. We investigated the accuracy of analytical techniques based on Kubo-Van Kampen operator cumulant expansions for Langevin equations driven by fractional Brownian motion and other noises.

  11. 3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, Greg; Wohlwend, Jen

    2017-10-02

    This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.

  12. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically moderate and high dimensional problems' variability can often be reduced to only a few dimensions and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. 
To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
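
    The evidence-theory representation adopted above rests on Dempster's rule of combination; a minimal sketch with made-up masses over a two-element frame is given below (illustrative only, not the dissertation's fidelity-selection implementation).

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: fuse two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                       # mass assigned to contradictory evidence
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

LOW, HIGH = frozenset({"low_error"}), frozenset({"high_error"})
BOTH = LOW | HIGH                                     # ignorance: mass on the whole frame
m_expert = {LOW: 0.6, BOTH: 0.4}                      # sparse-data expert assessment (assumed)
m_test = {LOW: 0.3, HIGH: 0.5, BOTH: 0.2}             # limited benchmark evidence (assumed)
print(combine(m_expert, m_test))
```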

  13. Study the effect of reservoir spatial heterogeneity on CO2 sequestration under an uncertainty quantification (UQ) software framework

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.

    2011-12-01

    In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, with a focus on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas/approaches: 1) first, we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the number of required forward calculations while exploring the parameter space and quantifying the input uncertainty; 2) second, we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication and supports a shared memory programming style on distributed memory platforms. It provides highly scalable performance. It uses a data model to partition most of the large-scale data structures into a relatively small number of distinct classes. The lower-level simulator infrastructure (e.g., meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines using a grid component interface; and 3) in addition to the faster model and more efficient algorithms that speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to optimally allocate system resources to improve performance, and to integrate software packages and data for composing carbon sequestration simulation, computation, analysis, estimation, and visualization. We will demonstrate the framework with a given CO2 injection scenario in a heterogeneous sandstone reservoir.

  14. Developing Hydrogeological Site Characterization Strategies based on Human Health Risk

    NASA Astrophysics Data System (ADS)

    de Barros, F.; Rubin, Y.; Maxwell, R. M.

    2013-12-01

    In order to provide better sustainable groundwater quality management and minimize the impact of contamination on humans, improved understanding and quantification of the interaction between hydrogeological models, geological site information, and human health are needed. Considering the joint influence of these components in the overall human health risk assessment, and the corresponding sources of uncertainty, aids decision makers in better allocating resources in data acquisition campaigns. This is important to (1) achieve remediation goals in a cost-effective manner, (2) protect human health, and (3) keep water supplies clean in order to comply with quality standards. Such a task is challenging since a full characterization of the subsurface is unfeasible due to financial and technological constraints. In addition, human exposure and physiological response to contamination are subject to uncertainty and variability. Normally, sampling strategies are developed with the goal of reducing uncertainty, but less often are they developed in the context of their impacts on the overall system uncertainty. Therefore, quantifying the impact of each of these components (hydrogeological, behavioral, and physiological) on the final human health risk prediction can provide guidance for decision makers to best allocate resources towards minimal prediction uncertainty. In this presentation, a multi-component human health risk-based framework is presented which allows decision makers to set priorities through an information entropy-based visualization tool. Results highlight the role of the characteristic length-scales of flow and transport in determining data needs within an integrated hydrogeological-health framework. Conditions where uncertainty reduction in human health risk predictions may benefit from better understanding of the health component, as opposed to a more detailed hydrogeological characterization, are also discussed. Finally, results illustrate how different dose-response models can impact the probability of human health risk exceeding a regulatory threshold.

  15. Bridging groundwater models and decision support with a Bayesian network

    USGS Publications Warehouse

    Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert

    2013-01-01

    Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor into the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system arising from its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague Island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.

  16. Uncertainty quantification in fission cross section measurements at LANSCE

    DOE PAGES

    Tovesson, F.

    2015-01-09

    Neutron-induced fission cross sections have been measured for several isotopes of uranium and plutonium at the Los Alamos Neutron Science Center (LANSCE) over a wide range of incident neutron energies. The total uncertainties in these measurements are in the range 3–5% above 100 keV of incident neutron energy, and they result from uncertainties in the target, neutron source, and detector system. The individual sources of uncertainty are assumed to be uncorrelated; however, correlations in the cross section across neutron energy bins are considered. The quantification of the uncertainty contributions is described here.
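
    To make the combination rule concrete: for uncorrelated components, relative uncertainties add in quadrature. The component values below are illustrative placeholders rather than the measured LANSCE budget, but they show how a total in the quoted 3–5% range arises:

    $$\left(\frac{\Delta\sigma}{\sigma}\right)_{\mathrm{tot}} = \sqrt{\left(\frac{\Delta\sigma}{\sigma}\right)_{\mathrm{target}}^{2} + \left(\frac{\Delta\sigma}{\sigma}\right)_{\mathrm{flux}}^{2} + \left(\frac{\Delta\sigma}{\sigma}\right)_{\mathrm{det}}^{2}} = \sqrt{(2.0\%)^{2} + (2.0\%)^{2} + (1.5\%)^{2}} \approx 3.2\%$$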

  17. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We created parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combined them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We developed the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we created OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus was on fundamental mathematical/computational methods and algorithms, we assessed our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  18. Overview of the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.

  19. Summary Findings from the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.

  20. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
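
    A minimal sketch of the Monte Carlo approach the authors advocate, applied to a made-up incidence calculation: each non-sampling error source gets a judgment-based distribution, and the combined quantity is simulated. All counts and distributions here are hypothetical, not the paper's foodborne-illness inputs.

    ```python
    # Monte Carlo combination of systematic (non-sampling) error sources.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    reported = 1_000                                    # hypothetical surveillance count
    underreport = rng.triangular(1.5, 2.0, 3.0, n)      # expert-judgment correction factor
    misclass = rng.uniform(0.9, 1.1, n)                 # diagnostic misclassification factor
    incidence = reported * underreport * misclass

    lo, med, hi = np.percentile(incidence, [2.5, 50, 97.5])
    print(f"median {med:.0f}, 95% uncertainty interval ({lo:.0f}, {hi:.0f})")
    ```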

  1. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies

    PubMed Central

    Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza

    2018-01-01

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425

  2. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose. However, these studies often ignore issues related to real-world groundwater models, such as computational expense, existing observation data, and high parameter dimension. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
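
    The greedy, minimax idea can be sketched as follows: at each step, add the candidate observation whose inclusion maximizes the worst-case (over posterior parameter samples) reduction in predictive uncertainty. The benefit matrix below is a random stand-in for quantities that would come from Null-Space Monte Carlo runs, so the sketch shows the selection logic only.

    ```python
    # Greedy minimax selection of new observations.
    import numpy as np

    rng = np.random.default_rng(1)
    n_candidates, n_samples = 20, 50
    benefit = rng.random((n_candidates, n_samples))    # hypothetical per-sample uncertainty reduction

    selected, remaining = [], set(range(n_candidates))
    for _ in range(5):                                 # budget of five new observations
        def worst_case(c):
            # total benefit of selected + [c] under its least favourable posterior sample
            return benefit[selected + [c]].sum(axis=0).min()
        best = max(remaining, key=worst_case)
        selected.append(best)
        remaining.remove(best)
    print("greedy robust design:", selected)
    ```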

  3. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
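
    A hedged sketch of the sparse-surrogate idea, with plain L1-regularized regression (scikit-learn's Lasso) standing in for the Weighted Iterative Bayesian Compressive Sensing named above; the toy model, dimension, and polynomial order are illustrative only.

    ```python
    # Sparse polynomial chaos surrogate via L1-regularized regression.
    import numpy as np
    from itertools import product
    from numpy.polynomial.legendre import legval
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    dim, order, n_train = 5, 2, 100
    X = rng.uniform(-1, 1, (n_train, dim))             # inputs scaled to [-1, 1]
    y = X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2]         # hypothetical model output

    # Total-degree Legendre basis (21 terms for dim=5, order=2).
    multi_idx = [a for a in product(range(order + 1), repeat=dim) if sum(a) <= order]
    def basis(X):
        return np.column_stack([
            np.prod([legval(X[:, j], np.eye(order + 1)[a[j]]) for j in range(dim)], axis=0)
            for a in multi_idx])

    coef = Lasso(alpha=1e-3, fit_intercept=False).fit(basis(X), y).coef_
    print(f"{np.sum(np.abs(coef) > 1e-6)} of {len(multi_idx)} basis terms retained")
    ```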

  4. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  5. Uncertainty quantification and experimental design based on unsupervised machine learning identification of contaminant sources and groundwater types using hydrogeochemical data

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.

    2017-12-01

    Identification of the original groundwater types present in geochemical mixtures observed in an aquifer is a challenging but very important task. Frequently, some of the groundwater types are related to different infiltration and/or contamination sources associated with various geochemical signatures and origins. The characterization of groundwater mixing processes typically requires solving complex inverse models representing groundwater flow and geochemical transport in the aquifer, where the inverse analysis accounts for available site data. Usually, the model is calibrated against the available data characterizing the spatial and temporal distribution of the observed geochemical species. Numerous geochemical constituents and processes may need to be simulated in these models which further complicates the analyses. As a result, these types of model analyses are typically extremely challenging. Here, we demonstrate a new contaminant source identification approach that performs decomposition of the observation mixtures based on Nonnegative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with a custom semi-supervised clustering algorithm. Our methodology, called NMFk, is capable of identifying (a) the number of groundwater types and (b) the original geochemical concentration of the contaminant sources from measured geochemical mixtures with unknown mixing ratios without any additional site information. We also demonstrate how NMFk can be extended to perform uncertainty quantification and experimental design related to real-world site characterization. The NMFk algorithm works with geochemical data represented in the form of concentrations, ratios (of two constituents; for example, isotope ratios), and delta notations (standard normalized stable isotope ratios). The NMFk algorithm has been extensively tested on synthetic datasets; NMFk analyses have been actively performed on real-world data collected at the Los Alamos National Laboratory (LANL) groundwater sites related to Chromium and RDX contamination.
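
    The blind-source-separation core of the approach can be sketched with an off-the-shelf nonnegative matrix factorization, factoring observed mixtures V (wells x constituents) into mixing weights W and source signatures H. The synthetic end-members and mixing ratios are invented, and NMFk's semi-supervised clustering and selection of the number of sources are not reproduced here.

    ```python
    # NMF-based blind source separation sketch on synthetic mixtures.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)
    sources = np.array([[10.0, 1.0, 0.1],             # hypothetical end-member signatures
                        [0.5, 8.0, 2.0]])             # (2 sources x 3 constituents)
    mix = rng.dirichlet([1.0, 1.0], size=25)          # unknown mixing ratios per well
    V = mix @ sources + rng.normal(0, 0.05, (25, 3)).clip(0)

    model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
    W, H = model.fit_transform(V), model.components_
    print("recovered source signatures:\n", H.round(2))
    ```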

  6. Reducing the uncertainty in the fidelity of seismic imaging results

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Zou, Z.

    2017-12-01

    A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect of uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifacts. Key challenges to achieving fidelity in seismic imaging include: (1) the difficulty of telling signal from artifact and noise; (2) limitations in signal-to-noise ratio and seismic illumination; and (3) the multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they map in opposite directions between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach to seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called the imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. A good example is reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on the estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.

  7. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models treat the parameters as deterministic variables, but in practice the transmission parameters present large variability, cannot be determined exactly, and make it necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889
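
    For reference, the moment extraction this approach relies on is standard for any orthogonal chaos basis (generic notation; the paper's obesity-model equations are not reproduced here). With $\Psi_0 = 1$ and $\langle\Psi_i\Psi_j\rangle = 0$ for $i \neq j$, a truncated expansion gives the moments directly from its coefficients:

    $$u(t,\xi) \approx \sum_{k=0}^{P} u_k(t)\,\Psi_k(\xi), \qquad \mathbb{E}[u] = u_0(t), \qquad \operatorname{Var}[u] = \sum_{k=1}^{P} u_k(t)^{2}\,\langle\Psi_k^{2}\rangle$$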

  8. Ensembles modeling approach to study Climate Change impacts on Wheat

    NASA Astrophysics Data System (ADS)

    Ahmed, Mukhtar; Claudio, Stöckle O.; Nelson, Roger; Higgins, Stewart

    2017-04-01

    Simulations of crop yield under climate variability are subject to uncertainties, and quantification of such uncertainties is essential for effective use of projected results in adaptation and mitigation strategies. In this study we evaluated the uncertainties related to crop-climate models using five crop growth simulation models (CropSyst, APSIM, DSSAT, STICS and EPIC) and 14 general circulation models (GCMs) for 2 representative concentration pathways (RCPs, with radiative forcing of 4.5 and 8.5 W m-2) in the Pacific Northwest (PNW), USA. The aim was to assess how well different process-based crop models could be used for estimation of winter wheat growth, development and yield. First, all models were calibrated for high rainfall, medium rainfall, low rainfall and irrigated sites in the PNW using 1979-2010 as the baseline period. Response variables were related to farm management and soil properties, and included crop phenology, leaf area index (LAI), biomass and grain yield of winter wheat. All five models were run from 2000 to 2100 using the 14 GCMs and 2 RCPs to evaluate the effect of future climate (rainfall, temperature and CO2) on winter wheat phenology, LAI, biomass, grain yield and harvest index. Simulated time to flowering and maturity was reduced in all models except EPIC, with some level of uncertainty. All models generally predicted an increase in biomass and grain yield under elevated CO2, but this effect was more prominent under rainfed conditions than under irrigation. However, there was uncertainty in the simulation of crop phenology, biomass and grain yield across the 14 GCMs during the three prediction periods (2030, 2050 and 2070). We concluded that to improve accuracy and consistency in simulating wheat growth dynamics and yield under a changing climate, a multimodel ensemble approach should be used.

  9. Advances and Challenges In Uncertainty Quantification with Application to Climate Prediction, ICF design and Science Stockpile Stewardship

    NASA Astrophysics Data System (ADS)

    Klein, R.; Woodward, C. S.; Johannesson, G.; Domyancic, D.; Covey, C. C.; Lucas, D. D.

    2012-12-01

    Uncertainty Quantification (UQ) is a critical field within 21st century simulation science that resides at the very center of the web of emerging predictive capabilities. The science of UQ holds the promise of giving much greater meaning to the results of complex large-scale simulations, allowing for quantifying and bounding uncertainties. This powerful capability will yield new insights into scientific predictions (e.g. climate) of great impact in both national and international arenas, allow informed decisions on the design of critical experiments (e.g. ICF capsule design, MFE, NE) in many scientific fields, and assign confidence bounds to scientifically predictable outcomes (e.g. nuclear weapons design). In this talk I will discuss a major new strategic initiative (SI) we have developed at Lawrence Livermore National Laboratory to advance the science of Uncertainty Quantification at LLNL, focusing in particular on (a) the research and development of new algorithms and methodologies of UQ as applied to multi-physics multi-scale codes, (b) incorporation of these advancements into a global UQ Pipeline (i.e. a computational superstructure) that will simplify user access to sophisticated tools for UQ studies as well as act as a self-guided, self-adapting UQ engine for UQ studies on extreme computing platforms, and (c) the use of laboratory applications as a test bed for new algorithms and methodologies. The initial SI focus has been on applications for the quantification of uncertainty associated with climate prediction, but the validated UQ methodologies we have developed are now being fed back into Science Based Stockpile Stewardship (SSS) and ICF UQ efforts. To make advancements in several of these UQ grand challenges, I will focus in this talk on the following three research areas in our Strategic Initiative: error estimation in multi-physics and multi-scale codes; tackling the "curse of high dimensionality"; and development of an advanced UQ Computational Pipeline to enable complete UQ workflow and analysis for ensemble runs at the extreme scale (e.g. exascale) with self-guiding adaptation in the UQ Pipeline engine. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).

  10. Quantitative diagnosis and prognosis framework for concrete degradation due to alkali-silica reaction

    NASA Astrophysics Data System (ADS)

    Mahadevan, Sankaran; Neal, Kyle; Nath, Paromita; Bao, Yanqing; Cai, Guowei; Orme, Peter; Adams, Douglas; Agarwal, Vivek

    2017-02-01

    This research is seeking to develop a probabilistic framework for health diagnosis and prognosis of aging concrete structures in nuclear power plants that are subjected to physical, chemical, environment, and mechanical degradation. The proposed framework consists of four elements: monitoring, data analytics, uncertainty quantification, and prognosis. The current work focuses on degradation caused by ASR (alkali-silica reaction). Controlled concrete specimens with reactive aggregate are prepared to develop accelerated ASR degradation. Different monitoring techniques — infrared thermography, digital image correlation (DIC), mechanical deformation measurements, nonlinear impact resonance acoustic spectroscopy (NIRAS), and vibro-acoustic modulation (VAM) — are studied for ASR diagnosis of the specimens. Both DIC and mechanical measurements record the specimen deformation caused by ASR gel expansion. Thermography is used to compare the thermal response of pristine and damaged concrete specimens and generate a 2-D map of the damage (i.e., ASR gel and cracked area), thus facilitating localization and quantification of damage. NIRAS and VAM are two separate vibration-based techniques that detect nonlinear changes in dynamic properties caused by the damage. The diagnosis results from multiple techniques are then fused using a Bayesian network, which also helps to quantify the uncertainty in the diagnosis. Prognosis of ASR degradation is then performed based on the current state of degradation obtained from diagnosis, by using a coupled thermo-hydro-mechanical-chemical (THMC) model for ASR degradation. This comprehensive approach of monitoring, data analytics, and uncertainty-quantified diagnosis and prognosis will facilitate the development of a quantitative, risk informed framework that will support continuous assessment and risk management of structural health and performance.

  11. Uncertainty in urban flood damage assessment due to urban drainage modelling and depth-damage curve estimation.

    PubMed

    Freni, G; La Loggia, G; Notaro, V

    2010-01-01

    Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases is in most cases overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancements, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly dependent on data availability; this remains the main bottleneck in expected flooding damage estimation. Such functions are usually affected by significant uncertainty intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the depth-damage function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly because a large part of the total uncertainty is dependent on the depth-damage curves. Improving the estimation of these curves may provide better results in terms of uncertainty reduction than the adoption of detailed hydraulic models.

  12. A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts

    NASA Astrophysics Data System (ADS)

    Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel

    2016-04-01

    Uncertainty poses a significant challenge in environmental research, and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked to the others. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from five different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within-model structure, and model parameterisation. Latin Hypercube sampling is applied to develop large parameter ensembles for each model structure, which are run using parallel computing on a high performance computer cluster. Parameterisations are assessed using multi-objective evaluation criteria which include both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such, the relative importance of each aspect of uncertainty can be determined.
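
    A minimal sketch of the ensemble-generation step: Latin hypercube sampling of a lumped model's parameters, retaining a parameter set only if it passes both a general and a drought-focused (low-flow) performance metric. The toy model, parameter bounds, and thresholds are assumptions, not the study's.

    ```python
    # Latin hypercube parameter ensemble with dual acceptance criteria.
    import numpy as np
    from scipy.stats import qmc

    rng = np.random.default_rng(4)
    obs = np.maximum(rng.gamma(2.0, 1.0, 365), 0.05)         # synthetic daily flows

    lo, hi = np.array([0.5, 0.05]), np.array([4.0, 0.95])    # assumed parameter bounds
    theta = qmc.scale(qmc.LatinHypercube(d=2, seed=4).random(2000), lo, hi)

    def toy_model(p):                                        # stand-in for a lumped catchment model
        return np.maximum(p[0] * obs ** p[1] + rng.normal(0, 0.1, obs.size), 0.01)

    def nse(sim, ref):                                       # Nash-Sutcliffe efficiency
        return 1 - np.sum((sim - ref) ** 2) / np.sum((ref - ref.mean()) ** 2)

    keep = []
    for p in theta:
        sim = toy_model(p)
        # General fit plus a log-space (low-flow, drought-relevant) fit:
        if nse(sim, obs) > 0.5 and nse(np.log(sim), np.log(obs)) > 0.5:
            keep.append(p)
    print(f"{len(keep)} behavioural parameter sets out of {len(theta)}")
    ```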

  13. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    NASA Astrophysics Data System (ADS)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.

  14. Validating the applicability of the GUM procedure

    NASA Astrophysics Data System (ADS)

    Cox, Maurice G.; Harris, Peter M.

    2014-08-01

    This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark against which measurement results provided by the GUM can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or have been linearized where the linearization process is deemed to be adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials as benchmarks.
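
    The benchmarking idea can be illustrated on a deliberately simple model: compare the GUM law of propagation of uncertainty with a Monte Carlo propagation (in the spirit of GUM Supplement 1) for Y = X1·X2 with independent Gaussian inputs. The input values are arbitrary.

    ```python
    # GUM linearization vs. Monte Carlo propagation for Y = X1 * X2.
    import numpy as np

    x1, u1 = 10.0, 0.5                                 # estimate and standard uncertainty
    x2, u2 = 2.0, 0.3
    y = x1 * x2
    u_gum = np.hypot(x2 * u1, x1 * u2)                 # first-order (GUM) combined uncertainty

    rng = np.random.default_rng(5)
    ys = rng.normal(x1, u1, 1_000_000) * rng.normal(x2, u2, 1_000_000)
    lo, hi = np.percentile(ys, [2.5, 97.5])
    print(f"GUM: {y:.2f} +/- {1.96 * u_gum:.2f};  MC 95% interval: ({lo:.2f}, {hi:.2f})")
    ```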

  15. Quantifying Risks and Uncertainties Associated with Induced Seismicity due to CO2 Injection into Geologic Formations with Faults

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Nguyen, B. N.; Bacon, D. H.; White, M. D.; Murray, C. J.

    2016-12-01

    A multiphase flow and reactive transport simulator named STOMP-CO2-R has been developed and coupled to the ABAQUS® finite element package for geomechanical analysis enabling comprehensive thermo-hydro-geochemical-mechanical (THMC) analyses. The coupled THMC simulator has been applied to analyze faulted CO2 reservoir responses (e.g., stress and strain distributions, pressure buildup, slip tendency factor, pressure margin to fracture) with various complexities in fault and reservoir structures and mineralogy. Depending on the geological and reaction network settings, long-term injection of CO2 can have a significant effect on the elastic stiffness and permeability of formation rocks. In parallel, an uncertainty quantification framework (UQ-CO2), which consists of entropy-based prior uncertainty representation, efficient sampling, geostatistical reservoir modeling, and effective response surface analysis, has been developed for quantifying risks and uncertainties associated with CO2 sequestration. It has been demonstrated for evaluating risks in CO2 leakage through natural pathways and wellbores, and for developing predictive reduced order models. Recently, a parallel STOMP-CO2-R has been developed and the updated STOMP/ABAQUS model has been proven to have a great scalability, which makes it possible to integrate the model with the UQ framework to effectively and efficiently explore multidimensional parameter space (e.g., permeability, elastic modulus, crack orientation, fault friction coefficient) for a more systematic analysis of induced seismicity risks.

  16. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source for human exposures to potential carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models were developed to simulate the vapor intrusion process, detailed validation of these models against well controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.

  17. Forensic Uncertainty Quantification of Explosive Dispersal of Particles

    NASA Astrophysics Data System (ADS)

    Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho

    2017-06-01

    In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued by poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments and discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience has been that valuable insights may be gained by approaching validation experiments the way an investigator approaches a crime scene: one examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator, which helps remove possible implicit biases and increases the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.

  18. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

    Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrologic Data) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.

  19. Reduced Order Modeling Methods for Turbomachinery Design

    DTIC Science & Technology

    2009-03-01

    and Materials Conference, May 2006. [45] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis. New York, NY: Chapman & Hall...Macian-Juan, and R. Chawla, "A statistical methodology for quantification of uncertainty in best estimate code physical models," Annals of Nuclear En

  20. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

    These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL shaped-charge geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.

  1. Evaluation of Global Observations-Based Evapotranspiration Datasets and IPCC AR4 Simulations

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Seneviratne, S. I.; Jimenez, C.; Corti, T.; Hirschi, M.; Balsamo, G.; Ciais, P.; Dirmeyer, P.; Fisher, J. B.; Guo, Z.

    2011-01-01

    Quantification of global land evapotranspiration (ET) has long been associated with large uncertainties due to the lack of reference observations. Several recently developed products now provide the capacity to estimate ET at global scales. These products, partly based on observational data, include satellite-based products, land surface model (LSM) simulations, atmospheric reanalysis output, estimates based on empirical upscaling of eddy-covariance flux measurements, and atmospheric water balance datasets. The LandFlux-EVAL project aims to evaluate and compare these newly developed datasets. Additionally, an evaluation of IPCC AR4 global climate model (GCM) simulations is presented, providing an assessment of their capacity to reproduce flux behavior relative to the observations-based products. Though differently constrained with observations, the analyzed reference datasets display similar large-scale ET patterns. ET from the IPCC AR4 simulations was significantly smaller than that from the other products for India (up to 1 mm/d) and parts of eastern South America, and larger in the western USA, Australia and China. The inter-product variance is lower across the IPCC AR4 simulations than across the reference datasets in several regions, which indicates that uncertainties may be underestimated in the IPCC AR4 models due to shared biases of these simulations.

  2. Nuclear Data Uncertainty Quantification: Past, Present and Future

    NASA Astrophysics Data System (ADS)

    Smith, D. L.

    2015-01-01

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  3. WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, H; Yeung, I; Milosevic, M

    2016-06-15

    Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic) voxels, the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximum-likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
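
    A hedged sketch of the two-distribution idea: fit a two-component Gaussian mixture to voxel uptake values by expectation-maximization (a stand-in for the paper's maximum-likelihood formulation) and read the hypoxic fraction off the weight of the high-uptake component. The synthetic voxel values are invented.

    ```python
    # Mixture-model estimate of the hypoxic fraction from voxel values.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(6)
    normoxic = rng.normal(1.0, 0.15, 8000)            # tumor-to-muscle uptake ratio near 1
    hypoxic = rng.normal(1.6, 0.20, 2000)             # elevated tracer uptake
    voxels = np.concatenate([normoxic, hypoxic]).reshape(-1, 1)

    gm = GaussianMixture(n_components=2, random_state=0).fit(voxels)
    hf = gm.weights_[np.argmax(gm.means_.ravel())]    # weight of the high-uptake mode
    print(f"estimated hypoxic fraction: {hf:.2f}")    # true value here is 0.20
    ```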

  4. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.

  5. Quantifying uncertainty and computational complexity for pore-scale simulations

    NASA Astrophysics Data System (ADS)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool to understand the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. However, in practice, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely-resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on the general polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires fewer realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.

  6. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.

    PubMed

    Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana

    2012-05-15

    Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters and similar model prediction intervals. For the ill-posed water quality model, the differences between the results were much wider, and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. the number of iterations required to generate the probability distribution of parameters), it was found that SCEM-UA and AMALGAM produce results more quickly than GLUE in terms of the required number of simulations. However, GLUE requires the lowest modelling skills and is easy to implement. All non-Bayesian methods have problems with the way they accept behavioural parameter sets; e.g. GLUE, SCEM-UA and AMALGAM have subjective acceptance thresholds, while MICA usually has problems with its assumption of normality of residuals. It is concluded that modellers should select the method which is most suitable for the system they are modelling (e.g. the complexity of the model's structure, including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study.
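
    Among the techniques compared, GLUE is the simplest to sketch: sample parameter sets, score each with an informal likelihood measure (here the Nash-Sutcliffe efficiency), keep only the "behavioural" sets above a subjective acceptance threshold, and derive prediction bounds from the retained simulations. The snippet below illustrates this with a hypothetical one-parameter runoff model and synthetic observations; GLUE proper would additionally weight the bounds by the likelihood of each behavioural set.

      import numpy as np

      def runoff_model(rainfall, c):
          # Hypothetical conceptual model: runoff as a constant fraction of rainfall.
          return c * rainfall

      rng = np.random.default_rng(7)
      rainfall = rng.gamma(2.0, 5.0, 200)
      observed = 0.4 * rainfall + rng.normal(0.0, 1.0, rainfall.size)   # synthetic observations

      n_sets, threshold = 5000, 0.7
      c_samples = rng.uniform(0.0, 1.0, n_sets)
      sims = np.array([runoff_model(rainfall, c) for c in c_samples])

      # Informal likelihood: Nash-Sutcliffe efficiency of each parameter set.
      nse = 1.0 - np.sum((sims - observed) ** 2, axis=1) / np.sum((observed - observed.mean()) ** 2)
      behavioural = nse > threshold                                     # subjective acceptance threshold

      # Simple (unweighted) 90% prediction bounds from the behavioural simulations.
      lower = np.percentile(sims[behavioural], 5, axis=0)
      upper = np.percentile(sims[behavioural], 95, axis=0)
      print(f"{behavioural.sum()} behavioural sets; mean bound width = {(upper - lower).mean():.2f}")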

  7. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-01-01

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss-Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.

  8. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-12-31

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss-Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.

  9. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
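
    The correlation-based sensitivity measures named above are straightforward to compute once the input samples and the corresponding figures of merit are available. The sketch below computes Pearson and Spearman coefficients with SciPy for a toy stand-in of the MDNBR response (a partial correlation would additionally regress out the other inputs); the input names, distributions, and response model are illustrative assumptions, not the VERA-CS setup.

      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(3)
      n = 500
      inlet_temperature = rng.normal(565.0, 2.0, n)      # hypothetical uncertain inputs
      inlet_flow = rng.normal(1.0, 0.02, n)
      # Toy stand-in for the MDNBR figure of merit produced by the core simulator.
      mdnbr = 3.0 - 0.01 * (inlet_temperature - 565.0) + 0.5 * (inlet_flow - 1.0) + rng.normal(0, 0.005, n)

      for name, x in [("inlet temperature", inlet_temperature), ("inlet flow", inlet_flow)]:
          r, _ = pearsonr(x, mdnbr)
          rho, _ = spearmanr(x, mdnbr)
          print(f"{name:18s} Pearson = {r:+.2f}  Spearman = {rho:+.2f}")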

  10. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.

  11. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. We resort to (i) Relevance Vector Machines (RVMs) for selecting a small number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world, vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by the Prognostic Horizon (PH) metric.

  12. Quantifying the uncertainty in discharge data using hydraulic knowledge and uncertain gaugings: a Bayesian method named BaRatin

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain

    2015-04-01

    River discharge is a crucial variable for hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we describe a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). An operational version of the BaRatin software and its graphical interface are made available free of charge on request to the authors. J. Le Coz, B. Renard, L. Bonnifait, F. Branger, R. Le Boursicaud (2014). Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.
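
    Step (2) can be sketched compactly: assume a power-law rating curve Q = a(h - b)^c, place priors on its parameters, and sample the posterior given noisy gaugings, for example with a random-walk Metropolis sampler. The snippet below is a heavily simplified, single-control illustration with synthetic gaugings and flat priors; BaRatin itself handles multiple hydraulic controls, per-gauging uncertainties, and remnant errors far more carefully.

      import numpy as np

      rng = np.random.default_rng(5)
      # Synthetic gaugings: stage h (m) and discharge Q (m3/s) with roughly 7% measurement noise.
      h_obs = np.linspace(0.5, 3.0, 15)
      q_obs = 20.0 * (h_obs - 0.2) ** 1.6 * (1.0 + rng.normal(0.0, 0.07, h_obs.size))

      def log_post(theta):
          # Log-posterior for Q = a (h - b)^c with lognormal errors and flat, constrained priors.
          a, b, c, sigma = theta
          if a <= 0 or c <= 0 or sigma <= 0 or b >= h_obs.min():
              return -np.inf
          resid = np.log(q_obs) - np.log(a * (h_obs - b) ** c)
          return -0.5 * np.sum(resid ** 2) / sigma ** 2 - h_obs.size * np.log(sigma)

      theta = np.array([15.0, 0.0, 1.5, 0.1])
      lp = log_post(theta)
      samples = []
      for _ in range(20000):                              # random-walk Metropolis
          prop = theta + rng.normal(0.0, [0.5, 0.02, 0.05, 0.01])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          samples.append(theta.copy())
      posterior = np.array(samples[5000:])                # discard burn-in
      print("posterior mean of (a, b, c, sigma):", posterior.mean(axis=0).round(2))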

  13. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    NASA Astrophysics Data System (ADS)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site, based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from the parametric uncertainty. To quantify the conceptual uncertainty for a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site. At the site, a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown, and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the management options.
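
    The combination step can be illustrated simply: given Monte Carlo mass discharge samples from each conceptual model and a weight per model (obtained in the study from Bayesian belief networks, but simply asserted here), Bayesian model averaging mixes the per-model distributions according to those weights. A minimal, entirely hypothetical sketch:

      import numpy as np

      rng = np.random.default_rng(11)
      # Monte Carlo mass discharge samples (g/day) from three hypothetical conceptual models.
      model_samples = {
          "matrix-dominated":   rng.lognormal(mean=1.0, sigma=0.4, size=5000),
          "fracture-dominated": rng.lognormal(mean=1.8, sigma=0.6, size=5000),
          "mixed":              rng.lognormal(mean=1.4, sigma=0.5, size=5000),
      }
      # Model weights, e.g. elicited via a Bayesian belief network (asserted here for illustration).
      weights = {"matrix-dominated": 0.2, "fracture-dominated": 0.5, "mixed": 0.3}

      # BMA predictive sample: resample each model's output in proportion to its weight.
      n_total = 10_000
      pooled = np.concatenate([
          rng.choice(samples, size=int(round(weights[name] * n_total)), replace=True)
          for name, samples in model_samples.items()
      ])
      print(f"BMA mean = {pooled.mean():.1f} g/day, 90% interval = "
            f"[{np.percentile(pooled, 5):.1f}, {np.percentile(pooled, 95):.1f}] g/day")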

  14. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
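
    For reference, the non-rectangular hyperbola light-response model is commonly written with an initial slope (quantum yield) alpha, an asymptotic maximum P_max, and a curvature parameter theta. The sketch below implements this standard textbook form; the exact parameterization and the additional temperature dependence used in the study may differ.

      import numpy as np

      def nrh_gpp(par, alpha, p_max, theta):
          # Non-rectangular hyperbola light response:
          #   GPP = (alpha*PAR + P_max - sqrt((alpha*PAR + P_max)^2 - 4*theta*alpha*PAR*P_max)) / (2*theta)
          # theta in (0, 1] controls curvature; theta -> 0 recovers the rectangular hyperbola.
          s = alpha * par + p_max
          return (s - np.sqrt(s ** 2 - 4.0 * theta * alpha * par * p_max)) / (2.0 * theta)

      # Example: a few half-hourly radiation values with a plausible (assumed) parameter set.
      par = np.array([0.0, 100.0, 400.0, 1200.0])        # photosynthetically active radiation
      print(nrh_gpp(par, alpha=0.05, p_max=25.0, theta=0.9).round(2))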

  15. A Greenhouse-Gas Information System: Monitoring and Validating Emissions Reporting and Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonietz, Karl K.; Dimotakis, Paul E.; Rotman, Douglas A.

    2011-09-26

    This study and report focus on attributes of a greenhouse-gas information system (GHGIS) needed to support MRV&V needs. These needs set the function of such a system apart from scientific/research monitoring of GHGs and carbon-cycle systems, and include (not exclusively): the need for a GHGIS that is operational, as required for decision-support; the need for a system that meets specifications derived from imposed requirements; the need for rigorous calibration, verification, and validation (CV&V) standards, processes, and records for all measurement and modeling/data-inversion data; the need to develop and adopt an uncertainty-quantification (UQ) regimen for all measurement and modeling data; and the requirement that GHGIS products can be subjected to third-party questioning and scientific scrutiny. This report examines and assesses presently available capabilities that could contribute to a future GHGIS. These capabilities include sensors and measurement technologies; data analysis and data uncertainty quantification (UQ) practices and methods; and model-based data-inversion practices, methods, and their associated UQ. The report further examines the need for traceable calibration, verification, and validation processes and attached metadata; differences between present science-/research-oriented needs and those that would be required for an operational GHGIS; the development, operation, and maintenance of a GHGIS missions-operations center (GMOC); and the complex systems engineering and integration that would be required to develop, operate, and evolve a future GHGIS.

  16. Quantifications of Geomagnetic Storm Impact on TEC and NmF2 during 2013 Mar. event

    NASA Astrophysics Data System (ADS)

    Shim, J. S.; Tsagouri, I.; Goncharenko, L. P.; Mays, M. L.; Taktakishvili, A.; Rastaetter, L.; Kuznetsova, M. M.

    2016-12-01

    We investigate the ionospheric response to the March 2013 geomagnetic storm event using GPS TEC, ISR, and ionosonde observations in the North American sector. In order to quantify variations of TEC and NmF2 (or foF2) due to the storm, we remove the background quiet-time values (e.g., the TEC of one day prior to the storm, and the NmF2 median and average of the five quietest days in the 30 days prior to the storm). In addition, in order to assess the capability of models to reproduce storm impacts on TEC and NmF2, we compare the observations with various model simulations, which are obtained from empirical, physics-based, and data assimilation models. Further, we investigate how uncertainty in the interplanetary magnetic field (IMF) impacts TEC and NmF2 during the geomagnetic storm event. For this uncertainty study, we use a physics-based coupled ionosphere-thermosphere model, CTIPe, and solar wind parameters obtained from an ensemble of WSA-ENLIL+Cone model simulations. This study has been supported by the Community Coordinated Modeling Center (CCMC) at the Goddard Space Flight Center. Model outputs and observational data used for the study will be permanently posted at the CCMC website (http://ccmc.gsfc.nasa.gov) for the space science communities to use.

  17. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
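
    The classification step can be sketched as follows: label each ensemble member as failed or successful, train a support vector classifier on the scaled parameter values, and evaluate it on held-out runs. The 18 POP2 parameters are replaced here by a two-parameter synthetic failure region, so the example only illustrates the mechanics, not the actual study.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      n = 2000
      params = rng.uniform(0.0, 1.0, size=(n, 2))        # stand-ins for mixing/viscosity parameters
      # Synthetic failure region: runs "crash" when both parameters are near their lower bounds.
      failed = ((params[:, 0] < 0.2) & (params[:, 1] < 0.3)).astype(int)

      x_train, x_test, y_train, y_test = train_test_split(params, failed, test_size=0.3, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
      clf.fit(x_train, y_train)

      print(f"held-out accuracy = {clf.score(x_test, y_test):.3f}")
      print("predicted failure probability at (0.1, 0.1):", round(clf.predict_proba([[0.1, 0.1]])[0, 1], 2))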

  18. A stochastic approach to uncertainty quantification in residual moveout analysis

    NASA Astrophysics Data System (ADS)

    Johng-Ay, T.; Landa, E.; Dossou-Gbété, S.; Bordes, L.

    2015-06-01

    Oil and gas exploration and production usually rely on the interpretation of a single seismic image, which is obtained from observed data. However, the statistical nature of seismic data and the various approximations and assumptions are sources of uncertainties which may corrupt the evaluation of parameters. The quantification of these uncertainties is a major issue, which is meant to support decisions that have important social and commercial implications. The residual moveout analysis, which is an important step in seismic data processing, is usually performed by a deterministic approach. In this paper we discuss a Bayesian approach to the uncertainty analysis.

  19. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier; Popov, Emilian L.; Pointer, William David

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.

  20. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. The lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  1. Report on an Informal Survey of Groundwater Modeling Practitioners About How They Quantify Uncertainty: Which Tools They Use, Why, and Why Not.

    NASA Astrophysics Data System (ADS)

    Ginn, T. R.; Scheibe, T. D.

    2006-12-01

    Hydrogeology is among the most data-limited of the earth sciences, so that uncertainty arises in every aspect of subsurface flow and transport modeling, from the conceptual model to spatial discretization to parameter values. Thus treatment of uncertainty is unavoidable, and the literature and conference proceedings are replete with approaches, templates, paradigms and such for doing so. However, such tools remain underused, especially those of the stochastic analytic sort, leading recently to explicit inquiries about why this is the case, to which entire journal issues have been dedicated. In an effort to continue this discussion in a constructive way, we report on an informal yet extensive survey of hydrogeology practitioners, as the "marketplace" for techniques to deal with uncertainty. The survey includes scientists, engineers, regulators, and others, and reports on quantitative (and non-quantitative) methods for uncertainty characterization and analysis, their frequency and level of usage, and the reasons behind the selection or avoidance of available methods. Results shed light on fruitful directions for future research in uncertainty quantification in hydrogeology.

  2. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes. - Highlights: • Proposed a physics-informed framework to quantify uncertainty in RANS simulations. • Framework incorporates physical prior knowledge and observation data. • Based on a rigorous Bayesian framework yet fully utilizes physical model. • Applicable for many complex physical systems beyond turbulent flows.
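
    The core update in an ensemble Kalman method can be sketched compactly: perturb an ensemble of the uncertain quantities, push each member through the forward model, and shift the members toward the observations using cross-covariances estimated from the ensemble itself. The snippet below performs a single, heavily simplified analysis step with a hypothetical linear forward map; the paper's framework additionally parameterizes the Reynolds-stress discrepancy and iterates this update.

      import numpy as np

      rng = np.random.default_rng(4)
      n_ens, n_state, n_obs = 50, 10, 3

      def forward_model(x):
          # Hypothetical forward map from uncertain parameters to observed quantities.
          H = np.linspace(0.1, 1.0, n_obs * n_state).reshape(n_obs, n_state)
          return H @ x

      ensemble = rng.normal(0.0, 1.0, size=(n_state, n_ens))       # prior ensemble (columns = members)
      obs = np.array([2.0, 1.0, 0.5])
      obs_err = 0.1

      # Ensemble Kalman analysis step.
      predictions = np.column_stack([forward_model(ensemble[:, j]) for j in range(n_ens)])
      x_anom = ensemble - ensemble.mean(axis=1, keepdims=True)
      y_anom = predictions - predictions.mean(axis=1, keepdims=True)
      cov_xy = x_anom @ y_anom.T / (n_ens - 1)
      cov_yy = y_anom @ y_anom.T / (n_ens - 1) + obs_err ** 2 * np.eye(n_obs)
      gain = cov_xy @ np.linalg.inv(cov_yy)

      perturbed_obs = obs[:, None] + rng.normal(0.0, obs_err, size=(n_obs, n_ens))
      posterior = ensemble + gain @ (perturbed_obs - predictions)
      print("posterior ensemble mean:", posterior.mean(axis=1).round(2))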

  3. An overview of methods to identify and manage uncertainty for modelling problems in the water-environment-agriculture cross-sector

    DOE PAGES

    Jakeman, Anthony J.; Jakeman, John Davis

    2018-03-14

    Uncertainty pervades the representation of systems in the water–environment–agriculture cross-sector. Successful methods to address uncertainties have largely focused on standard mathematical formulations of biophysical processes in a single sector, such as partial or ordinary differential equations. More attention to integrated models of such systems is warranted. Model components representing the different sectors of an integrated model can have less standard, and different, formulations to one another, as well as different levels of epistemic knowledge and data informativeness. Thus, uncertainty is not only pervasive but also crosses boundaries and propagates between system components. Uncertainty assessment (UA) cries out for more eclectic treatment in these circumstances, some of it being more qualitative and empirical. In this paper, we discuss the various sources of uncertainty in such a cross-sectoral setting and ways to assess and manage them. We have outlined a fast-growing set of methodologies, particularly in the computational mathematics literature on uncertainty quantification (UQ), that seem highly pertinent for uncertainty assessment. There appears to be considerable scope for advancing UA by integrating relevant UQ techniques into cross-sectoral problem applications. Of course this will entail considerable collaboration between domain specialists who often take first ownership of the problem and computational methods experts.

  4. An overview of methods to identify and manage uncertainty for modelling problems in the water-environment-agriculture cross-sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, Anthony J.; Jakeman, John Davis

    Uncertainty pervades the representation of systems in the water–environment–agriculture cross-sector. Successful methods to address uncertainties have largely focused on standard mathematical formulations of biophysical processes in a single sector, such as partial or ordinary differential equations. More attention to integrated models of such systems is warranted. Model components representing the different sectors of an integrated model can have less standard, and different, formulations to one another, as well as different levels of epistemic knowledge and data informativeness. Thus, uncertainty is not only pervasive but also crosses boundaries and propagates between system components. Uncertainty assessment (UA) cries out for more eclectic treatment in these circumstances, some of it being more qualitative and empirical. In this paper, we discuss the various sources of uncertainty in such a cross-sectoral setting and ways to assess and manage them. We have outlined a fast-growing set of methodologies, particularly in the computational mathematics literature on uncertainty quantification (UQ), that seem highly pertinent for uncertainty assessment. There appears to be considerable scope for advancing UA by integrating relevant UQ techniques into cross-sectoral problem applications. Of course this will entail considerable collaboration between domain specialists who often take first ownership of the problem and computational methods experts.

  5. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware requirements: multi-platform. Related/auxiliary software: PVM (if running in parallel).

  6. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.

  7. Nuclear Data Uncertainty Quantification: Past, Present and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D. L.

    2015-01-01

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  8. Nuclear Data Uncertainty Quantification: Past, Present and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.L., E-mail: Donald.L.Smith@anl.gov

    2015-01-15

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  9. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
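
    As a hedged illustration of the non-intrusive projection at the heart of such studies, the sketch below evaluates a model at the Gauss-Hermite nodes of a standard-normal input and projects the results onto the (probabilists') Hermite basis; the chaos coefficients then give the mean and variance directly. The deterministic and Monte Carlo rules, multi-dimensional inputs, and CFD models considered in the paper are considerably more involved.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss, hermeval
      from math import factorial

      def model(xi):
          # Stand-in for a deterministic simulation with one standard-normal uncertain input.
          return np.sin(0.5 * xi) + 0.2 * xi ** 2

      order, n_quad = 4, 8
      nodes, weights = hermegauss(n_quad)                # probabilists' Gauss-Hermite rule
      weights = weights / np.sqrt(2.0 * np.pi)           # normalize so the weights sum to one

      # Spectral projection: c_k = E[model(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!.
      coeffs = np.array([
          np.sum(weights * model(nodes) * hermeval(nodes, [0.0] * k + [1.0])) / factorial(k)
          for k in range(order + 1)
      ])

      mean = coeffs[0]
      variance = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
      print(f"PCE mean = {mean:.4f}, PCE std = {variance ** 0.5:.4f}")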

  10. Uncertainty quantification and propagation in nuclear density functional theory

    DOE PAGES

    Schunck, N.; McDonnell, J. D.; Higdon, D.; ...

    2015-12-23

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While on-going efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.

  11. The Zugspitze radiative closure experiment for quantifying water vapor absorption over the terrestrial and solar infrared - Part 1: Setup, uncertainty analysis, and assessment of far-infrared water vapor continuum

    NASA Astrophysics Data System (ADS)

    Sussmann, Ralf; Reichert, Andreas; Rettinger, Markus

    2016-09-01

    Quantitative knowledge of water vapor radiative processes in the atmosphere throughout the terrestrial and solar infrared spectrum is still incomplete even though this is crucial input to the radiation codes forming the core of both remote sensing methods and climate simulations. Besides laboratory spectroscopy, ground-based remote sensing field studies in the context of so-called radiative closure experiments are a powerful approach because this is the only way to quantify water absorption under cold atmospheric conditions. For this purpose, we have set up at the Zugspitze (47.42° N, 10.98° E; 2964 m a.s.l.) a long-term radiative closure experiment designed to cover the infrared spectrum between 400 and 7800 cm-1 (1.28-25 µm). As a benefit for such experiments, the atmospheric states at the Zugspitze frequently comprise very low integrated water vapor (IWV; minimum = 0.1 mm, median = 2.3 mm) and very low aerosol optical depth (AOD = 0.0024-0.0032 at 7800 cm-1 at air mass 1). All instruments for radiance measurements and atmospheric-state measurements are described along with their measurement uncertainties. Based on all parameter uncertainties and the corresponding radiance Jacobians, a systematic residual radiance uncertainty budget has been set up to characterize the sensitivity of the radiative closure over the whole infrared spectral range. The dominant uncertainty contribution in the spectral windows used for far-infrared (FIR) continuum quantification is from IWV uncertainties, while T profile uncertainties dominate in the mid-infrared (MIR). Uncertainty contributions to near-infrared (NIR) radiance residuals are dominated by water vapor line parameters in the vicinity of the strong water vapor bands. The window regions in between these bands are dominated by solar Fourier transform infrared (FTIR) calibration uncertainties at low NIR wavenumbers, while uncertainties due to AOD become an increasing and dominant contribution towards higher NIR wavenumbers. Exceptions are methane or nitrous oxide bands in the NIR, where the associated line parameter uncertainties dominate the overall uncertainty. As a first demonstration of the Zugspitze closure experiment, a water vapor continuum quantification in the FIR spectral region (400-580 cm-1) has been performed. The resulting FIR foreign-continuum coefficients are consistent with the MT_CKD 2.5.2 continuum model and also agree with the most recent atmospheric closure study carried out in Antarctica. Results from the first determination of the NIR water vapor continuum in a field experiment are detailed in a companion paper (Reichert and Sussmann, 2016) while a novel NIR calibration scheme for the underlying FTIR measurements of incoming solar radiance is presented in another companion paper (Reichert et al., 2016).

  12. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.
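
    Independently of the PDE sampler described above, the multilevel Monte Carlo estimator itself is easy to sketch: estimate the expectation on the coarsest level with many cheap samples and add corrections between adjacent levels estimated from a few coupled fine/coarse samples. The level costs, sample allocation, and toy quantity of interest below are purely illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      def sample_qoi(level, xi):
          # Hypothetical quantity of interest on mesh level `level`, driven by random input `xi`.
          # The discretization bias and noise shrink as the level (mesh resolution) increases.
          bias = 0.5 ** (level + 1)
          return 1.0 + bias + bias * xi

      def mlmc_estimate(n_per_level):
          # E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}], each term estimated independently.
          estimate = np.mean(sample_qoi(0, rng.standard_normal(n_per_level[0])))
          for level in range(1, len(n_per_level)):
              xi = rng.standard_normal(n_per_level[level])   # shared input couples the two levels
              estimate += np.mean(sample_qoi(level, xi) - sample_qoi(level - 1, xi))
          return estimate

      # Many cheap coarse samples, few expensive fine-level correction samples.
      print(f"MLMC estimate: {mlmc_estimate([20000, 2000, 200]):.3f}")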

  13. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10^9 unknowns.

  14. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretic framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 till now. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvements. Future research in the context of Bayesian flood forecasting should be on the assimilation of various sources of newly available information and the improvement of predictive performance assessment methods.

  15. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  16. ROMUSE 2.0 User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khuwaileh, Bassam; Turinsky, Paul; Williams, Brian J.

    2016-10-04

    ROMUSE (Reduced Order Modeling Based Uncertainty/Sensitivity Estimator) is an effort within the Consortium for Advanced Simulation of Light Water Reactors (CASL) to provide an analysis tool to be used in conjunction with reactor core simulators, especially the Virtual Environment for Reactor Applications (VERA). ROMUSE is written in C++ and is currently capable of performing various types of parameter perturbations, uncertainty quantification, surrogate model construction, and subspace analysis. Version 2.0 has the capability to interface with DAKOTA, which gives ROMUSE access to the various algorithms implemented within DAKOTA. ROMUSE is mainly designed to interface with VERA and the Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design (SCALE) [1,2,3]; however, ROMUSE can interface with any general model (e.g. Python and MATLAB) with an Input/Output (I/O) format that follows the Hierarchical Data Format 5 (HDF5). In this brief user manual, the use of ROMUSE is overviewed and example problems are presented and briefly discussed. The algorithms provided here range from algorithms inspired by those discussed in Ref. [4] to nuclear-specific algorithms discussed in Ref. [3].

  17. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities, which were affected during the 2002 flood by the River Mulde in Saxony, Germany, by comparison to official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.
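
    To make the contrast between the two model families concrete, the sketch below compares a uni-variable stage-damage function with a multi-variable regression model on synthetic object-level data. The predictors, the synthetic loss relationship, and the use of a random forest as a generic stand-in for multi-variable models such as (BT-)FLEMO are all assumptions made purely for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(8)
      n = 3000
      # Hypothetical building-level predictors: water depth (m), inundation duration (h), precaution score.
      depth = rng.uniform(0.1, 3.0, n)
      duration = rng.uniform(6.0, 96.0, n)
      precaution = rng.integers(0, 4, n)
      # Synthetic relative loss driven by several variables, not by depth alone.
      loss = np.clip(0.25 * depth + 0.002 * duration - 0.05 * precaution + rng.normal(0, 0.05, n), 0, 1)

      # Uni-variable "stage-damage function": relative loss as a polynomial in water depth only.
      sdf = np.poly1d(np.polyfit(depth[:2000], loss[:2000], deg=2))

      # Multi-variable model: a random forest used here as a simple stand-in.
      X = np.column_stack([depth, duration, precaution])
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:2000], loss[:2000])

      test = slice(2000, None)
      mae_sdf = np.mean(np.abs(sdf(depth[test]) - loss[test]))
      mae_rf = np.mean(np.abs(rf.predict(X[test]) - loss[test]))
      print(f"MAE stage-damage function: {mae_sdf:.3f}   MAE multi-variable model: {mae_rf:.3f}")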

  18. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    NASA Astrophysics Data System (ADS)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of paying penalties or receiving negative publicity in case they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure which improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models. Hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction in the flow contribution of catchment surfaces, are selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold based on Austrian emission standards for ammonia (De Toffol, 2006) with a certain frequency. For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability compared to the failure probability is tested by comparing the solutions obtained by using mean-pbf and mean-pf optimizations.
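
    A hedged, sample-based sketch of the quantity used above: the buffered failure probability is the size of the upper tail of the response distribution whose conditional mean equals the failure threshold, which makes it a conservative, tail-aware companion to the plain exceedance probability. The estimator and the synthetic spill-count distribution below are illustrative assumptions; the study's exact implementation may differ.

      import numpy as np

      def failure_probability(samples, threshold):
          # Plain exceedance probability pf = P(X >= threshold).
          return np.mean(samples >= threshold)

      def buffered_failure_probability(samples, threshold):
          # Fraction of the worst outcomes whose average equals the threshold (Rockafellar & Royset, 2010).
          x = np.sort(samples)[::-1]                     # worst (largest) outcomes first
          if x[0] < threshold:
              return 0.0                                 # even the worst outcome is below the threshold
          tail_means = np.cumsum(x) / np.arange(1, x.size + 1)
          if tail_means[-1] >= threshold:
              return 1.0                                 # the overall mean already reaches the threshold
          k = np.searchsorted(-tail_means, -threshold, side="right")
          return k / x.size                              # largest tail whose mean is still >= threshold

      rng = np.random.default_rng(6)
      # Hypothetical Monte Carlo output: annual number of CSO spill events exceeding the ammonia criterion.
      annual_exceedances = rng.poisson(lam=3.0, size=20000)
      z = 8
      print(f"pf  = {failure_probability(annual_exceedances, z):.3f}")
      print(f"pbf = {buffered_failure_probability(annual_exceedances, z):.3f}")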

  19. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Rodney O.; Passalacqua, Alberto

    2016-02-01

    Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models have been proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low-order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach in the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter. The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented in MFIX. The effects of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed are examined, and the reconstructed PDFs are provided for the three quantities studied. The approach is then applied to a bubbling fluidized bed with two uncertain parameters, the particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid-phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid-phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
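    The core of the non-intrusive QBUQ idea, generating quadrature samples of an uncertain input, running the deterministic solver once per sample, and forming response moments as weighted sums, can be sketched in a few lines. The toy model and the Gaussian viscosity below are illustrative assumptions only; the moments computed this way are what an EQMOM/ECQMOM step would subsequently use to reconstruct the response PDF.

```python
import numpy as np

def toy_model(viscosity):
    """Hypothetical quantity of interest as a function of an uncertain viscosity."""
    return 1.0 / (1.0 + viscosity) ** 2

# uncertain input ~ N(mu, sigma^2); probabilists' Gauss-Hermite nodes and weights
mu, sigma, n_nodes = 1.0e-3, 1.0e-4, 8
nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
weights = weights / weights.sum()          # normalise to a probability measure
inputs = mu + sigma * nodes                # map standard nodes to the physical input

responses = np.array([toy_model(v) for v in inputs])

# low-order raw moments of the response; these would feed the PDF reconstruction
moments = [np.sum(weights * responses ** k) for k in range(1, 5)]
mean = moments[0]
variance = moments[1] - mean ** 2
print("response mean     :", mean)
print("response variance :", variance)
print("raw moments 1-4   :", moments)
```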

  20. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis, conditioning the results on different seasons and years, is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
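    A minimal sketch of the sampling-and-sensitivity workflow follows: Sobol-sequence sampling of uniform prior parameter ranges and first-order sensitivity indices estimated from conditional means, in the spirit of RS-HDMR. The toy land model, the parameter names and the ranges are hypothetical; no CLM4.5 runs are involved.

```python
import numpy as np
from scipy.stats import qmc

def toy_land_model(x):
    """Stand-in for the land model; columns are hypothetical PFT parameters."""
    return 2.0 * x[:, 0] + x[:, 1] ** 2 + 0.3 * x[:, 2] * x[:, 3]

# hypothetical uniform prior ranges for four parameters
lower = np.array([4.0, 0.01, 20.0, 0.05])
upper = np.array([9.0, 0.05, 60.0, 0.25])

sampler = qmc.Sobol(d=4, scramble=True, seed=1)
params = qmc.scale(sampler.random_base2(m=10), lower, upper)   # 2**10 = 1024 samples
y = toy_land_model(params)

def first_order_index(xi, y, n_bins=20):
    """Var(E[y | x_i]) / Var(y) estimated by binning x_i (RS-HDMR-style)."""
    edges = np.quantile(xi, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.digitize(xi, edges[1:-1])
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    return np.sum(counts * (cond_means - y.mean()) ** 2) / (y.size * y.var())

for j, name in enumerate(["slope_conductance", "sla_top", "leaf_cn", "f_leaf_n_rubisco"]):
    print(f"S1[{name}] ~ {first_order_index(params[:, j], y):.3f}")
```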

  1. Probabilistic Prognosis of Non-Planar Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo

    2016-01-01

    Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.
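    The prognosis workflow above rests on replacing only the expensive crack-driving-force computation with a surrogate and sampling the remaining parameters with MCMC. The sketch below illustrates that pattern with a simple Paris-type growth law, a stand-in surrogate for the stress-intensity-factor range, and a random-walk Metropolis sampler on synthetic diagnosis data; it is not the authors' high-fidelity model.

```python
import numpy as np

def delta_K_surrogate(a, stress_range=100.0):
    """Stand-in surrogate for the crack driving force (MPa*sqrt(m))."""
    return stress_range * np.sqrt(np.pi * a)

def grow_crack(logC, m=3.0, a0=0.001, cycles=20_000, step=1_000):
    """Integrate da/dN = C * (dK)^m and return crack length at inspection points."""
    C, a, out = 10.0 ** logC, a0, []
    for _ in range(0, cycles, step):
        a += C * delta_K_surrogate(a) ** m * step
        out.append(a)
    return np.array(out)

rng = np.random.default_rng(2)
true_logC = -12.0
obs = grow_crack(true_logC) * (1.0 + 0.05 * rng.standard_normal(20))  # synthetic noisy diagnoses

def log_post(logC, rel_sigma=0.05):
    if not -14.0 < logC < -10.0:          # assumed uniform prior bounds
        return -np.inf
    pred = grow_crack(logC)
    resid = (obs - pred) / (rel_sigma * pred)
    return -0.5 * np.sum(resid ** 2)

# random-walk Metropolis on the growth parameter log10(C)
current, chain = -11.0, []
lp = log_post(current)
for _ in range(5_000):
    proposal = current + 0.05 * rng.standard_normal()
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = proposal, lp_prop
    chain.append(current)

posterior = np.array(chain[1_000:])
print("posterior logC:", posterior.mean(), "+/-", posterior.std())
```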

  2. Pareto-optimal estimates that constrain mean California precipitation change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet-season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
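    The key selection step, extracting the Pareto-optimal (non-dominated) subensembles from a table of error metrics, can be sketched as below. For brevity the sketch scans randomly generated scores exhaustively rather than running the evolutionary search used in the study, and no CMIP5 data are involved.

```python
import numpy as np

def pareto_mask(costs):
    """Boolean mask of non-dominated rows (all objectives are minimised)."""
    keep = np.ones(costs.shape[0], dtype=bool)
    for i in range(costs.shape[0]):
        better_or_equal = np.all(costs <= costs[i], axis=1)
        strictly_better = np.any(costs < costs[i], axis=1)
        if np.any(better_or_equal & strictly_better):
            keep[i] = False
    return keep

rng = np.random.default_rng(3)
# rows = candidate subensembles; columns = error in SST, zonal wind, precipitation climatology
errors = rng.random((200, 3))

front = errors[pareto_mask(errors)]
print(f"{front.shape[0]} Pareto-optimal subensembles out of {errors.shape[0]}")
```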

  3. Methodological challenges for the evaluation of clinical effectiveness in the context of accelerated regulatory approval: an overview.

    PubMed

    Woolacott, Nerys; Corbett, Mark; Jones-Diette, Julie; Hodgson, Robert

    2017-10-01

    Regulatory authorities are approving innovative therapies with limited evidence. Although this level of data is sufficient for the regulator to establish an acceptable risk-benefit balance, it is problematic for downstream health technology assessment, where assessment of cost-effectiveness requires reliable estimates of effectiveness relative to existing clinical practice. Key issues associated with a limited evidence base include using data from nonrandomized studies, from small single-arm trials, or from single-center trials, and using surrogate end points. We examined these methodological challenges through a pragmatic review of the available literature. Methods to adjust nonrandomized studies for confounding are imperfect. The relative treatment effect generated from single-arm trials is uncertain and may be optimistic. Single-center trial results may not be generalizable. Surrogate end points, on average, overestimate treatment effects. Current methods for analyzing such data are limited, and effectiveness claims based on these suboptimal forms of evidence are likely to be subject to significant uncertainty. Assessments of cost-effectiveness based on the modeling of such data are likewise likely to be subject to considerable uncertainty. This uncertainty must not be underestimated by decision makers: methods for its quantification are required, and schemes to protect payers from the cost of uncertainty should be implemented. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  4. Uncertainty quantification in LES of channel flow

    DOE PAGES

    Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...

    2016-07-12

    Here we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that the model parameters have an explicit filter-width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model-form inadequacies that need to be accounted for.

  5. Learning the Task Management Space of an Aircraft Approach Model

    NASA Technical Reports Server (NTRS)

    Krall, Joseph; Menzies, Tim; Davies, Misty

    2014-01-01

    Validating models of airspace operations is a particular challenge. These models are often aimed at finding and exploring safety violations, and aim to be accurate representations of real-world behavior. However, the rules governing the behavior are quite complex: nonlinear physics, operational modes, human behavior, and stochastic environmental concerns all determine the responses of the system. In this paper, we present a study on aircraft runway approaches as modeled in Georgia Tech's Work Models that Compute (WMC) simulation. We use a new learner, Genetic-Active Learning for Search-Based Software Engineering (GALE), to discover the Pareto frontiers defined by cognitive structures. These cognitive structures organize the prioritization and assignment of tasks of each pilot during approaches. We discuss the benefits of our approach, and also discuss the future work necessary to enable uncertainty quantification.

  6. The development of an hourly gridded rainfall product for hydrological applications in England and Wales

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross

    2014-05-01

    This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one-kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically, we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance through the use of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.

  7. Uncertainty characterization and quantification in air pollution models. Application to the ADMS-Urban model.

    NASA Astrophysics Data System (ADS)

    Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.

    2009-04-01

    Evaluation of human exposure to atmospheric pollution usually requires knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economic status on relationships between air pollution and short-term health effects, the concentrations of gas and particle pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. There are several sources of uncertainty: those related to input data and parameters, i.e. fields used to execute the model like meteorological fields, boundary conditions and emissions; those related to the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and those inherent to the stochastic behavior of the atmosphere and human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. The first step consisted in identifying the input data and model parameters that contribute most to the space and time variability of predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominating influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamical process. The second step of the characterization and quantification of uncertainties consists in attributing a probability distribution to each input datum and model parameter and in propagating the joint distribution of all data and parameters through the model, so as to associate a probability distribution with the modeled concentrations. Several analytical and numerical methods exist to perform an uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and slow convergence. To speed up convergence, we used the method of antithetic variates, which takes advantage of the symmetry of the probability laws. The air quality model simulations were carried out by the Association for the Study and Monitoring of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org) in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
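    A minimal sketch of the Monte Carlo propagation with antithetic variates described above: each normal perturbation of the inputs is paired with its mirror image, exploiting the symmetry of the probability laws to reduce the variance of the estimate. The dispersion model and the input distributions are toy stand-ins, not ADMS-Urban.

```python
import numpy as np

def toy_dispersion_model(emission, roughness):
    """Hypothetical concentration response to an emission factor and a roughness length."""
    return 40.0 * emission / (1.0 + 5.0 * roughness)

rng = np.random.default_rng(4)
n_pairs = 5_000

# standard-normal perturbations paired with their antithetic mirrors
z = rng.standard_normal((n_pairs, 2))
z_all = np.vstack([z, -z])

# lognormal input perturbations (spreads are assumed for illustration)
emission = 1.0 * np.exp(0.2 * z_all[:, 0])
roughness = 0.5 * np.exp(0.1 * z_all[:, 1])

conc = toy_dispersion_model(emission, roughness)
print("mean concentration :", conc.mean())
print("95% interval       :", np.percentile(conc, [2.5, 97.5]))
```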

  8. An info-gap application to robust design of a prestressed space structure under epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Hot, Aurélien; Weisser, Thomas; Cogan, Scott

    2017-07-01

    Uncertainty quantification is an integral part of the model validation process and is important to take into account during the design of mechanical systems. Sources of uncertainty are diverse but generally fall into two categories: aleatory, due to random processes, and epistemic, resulting from a lack of knowledge. This work focuses on the behavior of solar arrays in their stowed configuration. To avoid impacts during launch, snubbers are used to prestress the panels. Since the mechanical properties of the snubbers and the associated preload configurations are difficult to characterize precisely, an info-gap approach is proposed to investigate the influence of such uncertainties on design configurations obtained for different values of safety factors. This eventually allows the typical values of these factors to be revised and reevaluated with respect to a targeted robustness level. The proposed methodology is illustrated using a simplified finite element model of a solar array.

  9. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  10. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

    Recently, a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many thicknesses are needed to obtain an accurate result, so convergence testing is performed to quantify the uncertainty associated with interpolating over different shield-thickness spatial grids.

  11. Endophysical Models Based on Empirical Data

    NASA Astrophysics Data System (ADS)

    Jahn, Robert G.; Dunne, Brenda J.

    2005-10-01

    Any proposed endophysical models need to acknowledge a number of subjective correlates that have been well established in such objectively quantifiable experimental contexts as anomalous human/machine interactions and remote perception information acquisition. Most notable of these factors are conscious and unconscious intention; gender disparities; serial position effects; intrinsic uncertainties; elusive replicability; and emotional resonance between the participants and the devices, process, and tasks. Perhaps even more pertinent are the insensitivities of the anomalous effects to spatial and temporal separations of the participants from the physical targets. Inclusion of subjective coordinates in the models, and exclusion of physical distance and time, raise formidable issues of specification, quantification, and dynamical formulation from both the physical and psychological perspectives. A few primitive examples of possible approaches are presented.

  12. Development and validation of an in-line NIR spectroscopic method for continuous blend potency determination in the feed frame of a tablet press.

    PubMed

    De Leersnyder, Fien; Peeters, Elisabeth; Djalabi, Hasna; Vanhoorne, Valérie; Van Snick, Bernd; Hong, Ke; Hammond, Stephen; Liu, Angela Yang; Ziemons, Eric; Vervaet, Chris; De Beer, Thomas

    2018-03-20

    A calibration model for in-line API quantification based on near-infrared (NIR) spectra collected during tableting in the tablet press feed frame was developed and validated. First, the measurement set-up was optimised and the effect of the filling degree of the feed frame on the NIR spectra was investigated. Secondly, a predictive API quantification model was developed and validated by calculating the accuracy profile based on the analysis results of validation experiments. Furthermore, based on the data of the accuracy profile, the measurement uncertainty was determined. Finally, the robustness of the API quantification model was evaluated. An NIR probe (SentroPAT FO) was implemented in the feed frame of a rotary tablet press (Modul™ P) to monitor physical mixtures of a model API (sodium saccharine) and excipients with two different API target concentrations: 5 and 20% (w/w). Cutting notches into the paddle wheel fingers avoided disturbances of the NIR signal caused by the rotating paddle wheel fingers and hence allowed better and more complete feed frame monitoring. The effect of the design of the notched paddle wheel fingers was also investigated and showed that straight paddle wheel fingers caused less variation in the NIR signal than curved paddle wheel fingers. The filling degree of the feed frame was reflected in the raw NIR spectra. Several different calibration models for the prediction of the API content were developed, based on the use of single spectra or averaged spectra, and using partial least squares (PLS) regression or ratio models. These predictive models were then evaluated and validated by processing physical mixtures with API concentrations not used in the calibration models (validation set). The β-expectation tolerance intervals were calculated for each model and for each of the validated API concentration levels (β was set at 95%). PLS models showed the best predictive performance. For each examined saccharine concentration range (i.e., between 4.5 and 6.5% and between 15 and 25%), at least 95% of future measurements will not deviate more than 15% from the true value. Copyright © 2018 Elsevier B.V. All rights reserved.
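    The calibration step, regressing API concentration on NIR spectra with partial least squares and checking held-out predictions against a ±15% acceptance limit, can be sketched as follows using synthetic spectra. The absorption band, noise levels and concentration range are illustrative assumptions, not the published feed-frame data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
wavelengths = np.linspace(1100, 2200, 200)
api_band = np.exp(-0.5 * ((wavelengths - 1650) / 40) ** 2)   # hypothetical API absorption band

# synthetic mixtures around the 5% (w/w) target concentration
conc = rng.uniform(4.5, 6.5, size=120)
spectra = (conc[:, None] * api_band
           + 0.3 * rng.standard_normal((120, wavelengths.size))   # measurement noise
           + rng.uniform(0.0, 1.0, size=(120, 1)))                # baseline shift (fill-level proxy)

X_train, X_test, y_train, y_test = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=3).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rel_err = 100.0 * np.abs(y_pred - y_test) / y_test
print("RMSEP              :", np.sqrt(np.mean((y_pred - y_test) ** 2)))
print("max relative error :", rel_err.max(), "%")
```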

  13. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

    Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of reconstructed three-dimensional particle location that in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources are also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.

  14. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel-CO2.

    PubMed

    Coakley, Kevin J; Miller, John B; Montzka, Stephen A; Sweeney, Colm; Miller, Ben R

    2016-06-27

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons, acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and the corresponding observations are consistent given our uncertainty budget, which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of uncertainties and potentially increase the number of Cff estimates by approximately a factor of three. Provided that these estimates are of comparable quality to Δ14C-based estimates, we expect an improved determination of fossil fuel-CO2 emissions.

  15. Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ampomah, William; Balch, Robert; Will, Robert

    This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water-alternating-gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom-hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio, among others, were selected. The most significant variables were selected as control variables to be used in the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a baseline scenario case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% CO2 storage and, most importantly, about 28% incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
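    The uncertainty quantification step described above, Latin Hypercube sampling of the uncertain variables followed by Monte Carlo statistics of the objectives, is sketched below with an analytic toy in place of the compositional reservoir simulator. The variable names, ranges and response functions are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import qmc

def toy_reservoir(x):
    """Analytic stand-in returning (oil recovery factor, CO2 stored fraction)
    for inputs [bottom-hole pressure (MPa), WAG cycle (months), Kv/Kh]."""
    bhp, wag, kvkh = x[:, 0], x[:, 1], x[:, 2]
    recovery = 0.20 + 0.002 * bhp - 0.005 * wag + 0.05 * kvkh
    stored = 0.70 + 0.003 * bhp + 0.004 * wag - 0.10 * kvkh
    return recovery, stored

# hypothetical uncertain ranges
lower = np.array([20.0, 1.0, 0.01])
upper = np.array([35.0, 12.0, 0.50])

lhs = qmc.LatinHypercube(d=3, seed=6)
samples = qmc.scale(lhs.random(n=2_000), lower, upper)
recovery, stored = toy_reservoir(samples)

for name, q in (("oil recovery factor", recovery), ("CO2 stored fraction", stored)):
    print(f"{name}: mean={q.mean():.3f}  P10={np.percentile(q, 10):.3f}  P90={np.percentile(q, 90):.3f}")
```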

  16. Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty

    DOE PAGES

    Ampomah, William; Balch, Robert; Will, Robert; ...

    2017-07-01

    This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water-alternating-gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom-hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio, among others, were selected. The most significant variables were selected as control variables to be used in the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a baseline scenario case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% CO2 storage and, most importantly, about 28% incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.

  17. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating HEPs. This paper describes research that has the objective to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  18. Dakota Graphical User Interface v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest; Glickman, Matthew; Gibson, Marcus

    Graphical analysis environment for Sandia’s Dakota software for optimization and uncertainty quantification. The Dakota GUI is an interactive graphical analysis environment for creating, running, and interpreting Dakota optimization and uncertainty quantification studies. It includes problem (Dakota study) set-up, option specification, simulation interfacing, analysis execution, and results visualization. Through the use of wizards, templates, and views, the Dakota GUI helps users navigate Dakota’s complex capability landscape.

  19. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    Report TR-14-33, April 2014, prepared under grant HDTRA1-09-1-0036 by Donald Estep and Michael Holst; approved for public release, distribution is unlimited. Related publication: J. Erway and M. Holst, "Barrier methods for critical exponent problems in geometric analysis and mathematical physics," submitted for publication.

  20. Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...

  1. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.

  2. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
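    The two-stage workflow above, building a response surface from a space-filling design and then running cheap Monte Carlo on the surface, is sketched below. A quadratic polynomial surface is used as a plainly substituted stand-in for the MARS technique, and the pull-in-voltage function and parameter distributions are illustrative toys, not the MEMS switch model.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def toy_pull_in_voltage(x):
    """Analytic stand-in for the expensive pull-in voltage computation."""
    gap, thickness = x[:, 0], x[:, 1]          # micrometres
    return 8.0 * gap ** 1.5 / np.sqrt(thickness)

# assumed geometric parameter ranges for the design of experiments
lower, upper = np.array([1.5, 1.0]), np.array([2.5, 2.0])
design = qmc.scale(qmc.LatinHypercube(d=2, seed=7).random(n=60), lower, upper)

# stage 1: fit a cheap response surface on the design points
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(design, toy_pull_in_voltage(design))

# stage 2: dense Monte Carlo on the surface (inputs assumed independent normals)
rng = np.random.default_rng(7)
mc_inputs = np.column_stack([rng.normal(2.0, 0.10, 100_000),
                             rng.normal(1.5, 0.05, 100_000)])
v_pull_in = surface.predict(mc_inputs)
print("pull-in voltage: mean =", v_pull_in.mean(), " std =", v_pull_in.std())
```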

  3. Hierarchical Bayesian Modeling of Fluid-Induced Seismicity

    NASA Astrophysics Data System (ADS)

    Broccardo, M.; Mignan, A.; Wiemer, S.; Stojadinovic, B.; Giardini, D.

    2017-11-01

    In this study, we present a Bayesian hierarchical framework to model fluid-induced seismicity. The framework is based on a nonhomogeneous Poisson process with a fluid-induced seismicity rate proportional to the rate of injected fluid. The fluid-induced seismicity rate model depends upon a set of physically meaningful parameters and has been validated for six fluid-induced case studies. In line with the vision of hierarchical Bayesian modeling, the rate parameters are considered as random variables. We develop both the Bayesian inference and updating rules, which are used to develop a probabilistic forecasting model. We test the framework on the Basel 2006 fluid-induced seismicity case study to show that the hierarchical Bayesian model offers a suitable framework to coherently encode both epistemic uncertainty and aleatory variability. Moreover, it provides a robust and consistent short-term seismic forecasting model suitable for online risk quantification and mitigation.
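    The forward forecasting component, a nonhomogeneous Poisson process whose event rate is proportional to the injection rate, can be simulated by thinning as sketched below. The injection profile and the proportionality constant are illustrative assumptions, not the Basel values.

```python
import numpy as np

rng = np.random.default_rng(8)

def injection_rate(t):
    """Hypothetical injection rate (L/s) over a 10-day stimulation."""
    return np.where(t < 6.0, 10.0 + 5.0 * t, 0.0)

def seismicity_rate(t, k=0.8):
    """Induced event rate (events/day), proportional to the injection rate."""
    return k * injection_rate(t)

def simulate_nhpp(rate_fn, t_max, rate_max):
    """Thinning (Lewis-Shedler) simulation of a nonhomogeneous Poisson process."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t > t_max:
            return np.array(times)
        if rng.uniform() < rate_fn(t) / rate_max:
            times.append(t)

events = simulate_nhpp(seismicity_rate, t_max=10.0, rate_max=32.0)  # 32 = k * peak injection rate
print(f"{events.size} induced events simulated; last event at day {events.max():.2f}")
```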

  4. Data-free and data-driven spectral perturbations for RANS UQ

    NASA Astrophysics Data System (ADS)

    Edeling, Wouter; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

    Despite recent developments in high-fidelity turbulent flow simulations, RANS modeling is still widely used by industry due to its inherently low cost. Since accuracy is a concern in RANS modeling, model-form UQ is an essential tool for assessing the impact of this uncertainty on quantities of interest. Applying the spectral decomposition to the modeled Reynolds stress tensor (RST) allows for the introduction of decoupled perturbations into the baseline intensity (kinetic energy), shape (eigenvalues), and orientation (eigenvectors). This constitutes a natural methodology to evaluate the model-form uncertainty associated with different aspects of RST modeling. In a predictive setting, one frequently encounters an absence of any relevant reference data. To make data-free predictions with quantified uncertainty, we employ physical bounds to define, a priori, the maximum spectral perturbations. When propagated, these perturbations yield intervals of engineering utility. High-fidelity data open up the possibility of inferring a distribution of uncertainty by means of various data-driven machine-learning techniques. We will demonstrate our framework on a number of flow problems where RANS models are prone to failure. This research was partially supported by the Defense Advanced Research Projects Agency under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo), and the DOE PSAAP-II program.
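    The spectral perturbation itself amounts to decomposing the modeled Reynolds stress tensor into kinetic energy, eigenvalues (shape) and eigenvectors (orientation) and perturbing each component separately. The sketch below shows a shape perturbation toward the one-component limit for an arbitrary illustrative tensor; the perturbation amplitude is not taken from any physical bound.

```python
import numpy as np

def decompose_rst(tau):
    """Split a Reynolds stress tensor into k, anisotropy eigenvalues and eigenvectors."""
    k = 0.5 * np.trace(tau)
    a = tau / (2.0 * k) - np.eye(3) / 3.0          # anisotropy tensor
    eigval, eigvec = np.linalg.eigh(a)
    return k, eigval, eigvec

def recompose_rst(k, eigval, eigvec):
    a = eigvec @ np.diag(eigval) @ eigvec.T
    return 2.0 * k * (a + np.eye(3) / 3.0)

# arbitrary illustrative baseline Reynolds stress tensor (symmetric, positive definite)
tau = np.array([[0.10, 0.02, 0.00],
                [0.02, 0.06, 0.01],
                [0.00, 0.01, 0.04]])

k, lam, v = decompose_rst(tau)

# perturb the shape (eigenvalues) 20% of the way toward the one-component limit,
# keeping the anisotropy tensor trace-free
lam_1c = np.array([-1.0 / 3.0, -1.0 / 3.0, 2.0 / 3.0])
lam_pert = lam + 0.20 * (lam_1c - lam)

tau_pert = recompose_rst(k, lam_pert, v)
print("baseline k :", k)
print("perturbed Reynolds stress:\n", tau_pert)
```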

  5. Quantification and propagation of disciplinary uncertainty via Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Mantis, George Constantine

    2002-08-01

    Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single-stage-to-orbit (SSTO) reusable launch vehicle concept, developed by the NASA Langley Research Center under the Space Launch Initiative, provides the validation case for this work, with the focus placed on economics, aerothermodynamics, propulsion, and structures metrics. (Abstract shortened by UMI.)

  6. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal values of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.

  7. Noise Propagation and Uncertainty Quantification in Hybrid Multiphysics Models: Initiation and Reaction Propagation in Energetic Materials

    DTIC Science & Technology

    2016-05-23

    (i) the lack of a general model for heterogeneous granular media under compaction and (ii) the lack of a reliable multiscale discrete-to-continuum framework for ... dynamics. These include a continuum-discrete model of heat dissipation/diffusion and a continuum-discrete model of compaction of a granular material with ...

  8. Quantification of downscaled precipitation uncertainties via Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nury, A. H.; Sharma, A.; Marshall, L. A.

    2017-12-01

    Prediction of precipitation from global climate model (GCM) outputs remains critical to decision-making in water-stressed regions. In this regard, downscaling of GCM output has been a useful tool for analysing future hydro-climatological states. Several downscaling approaches have been developed for precipitation, including dynamical and statistical downscaling methods. Frequently, outputs from dynamical downscaling are not readily transferable across regions because of significant methodological and computational difficulties. Statistical downscaling approaches provide a flexible and efficient alternative, providing hydro-climatological outputs across multiple temporal and spatial scales in many locations. However, these approaches are subject to significant uncertainty, arising from uncertainty in the downscaled model parameters and from the use of different reanalysis products for inferring appropriate model parameters. Consequently, these uncertainties affect the performance of simulations at the catchment scale. This study develops a Bayesian framework for modelling downscaled daily precipitation from GCM outputs, and characterizes the uncertainties introduced in downscaling by evaluating reanalysis datasets against observational rainfall data over Australia. A consistent technique for quantifying downscaling uncertainties by means of a Bayesian downscaling framework is proposed. The results suggest that there are differences in downscaled precipitation occurrences and extremes.

  9. Invited Article: Concepts and tools for the evaluation of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Iyer, Hari K.

    2017-01-01

    Measurements involve comparisons of measured values with reference values traceable to measurement standards and are made to support decision-making. While the conventional definition of measurement focuses on quantitative properties (including ordinal properties), we adopt a broader view and entertain the possibility of regarding qualitative properties also as legitimate targets for measurement. A measurement result comprises the following: (i) a value that has been assigned to a property based on information derived from an experiment or computation, possibly also including information derived from other sources, and (ii) a characterization of the margin of doubt that remains about the true value of the property after taking that information into account. Measurement uncertainty is this margin of doubt, and it can be characterized by a probability distribution on the set of possible values of the property of interest. Mathematical or statistical models enable the quantification of measurement uncertainty and underlie the varied collection of methods available for uncertainty evaluation. Some of these methods have been in use for over a century (for example, as introduced by Gauss for the combination of mutually inconsistent observations or for the propagation of "errors"), while others are of fairly recent vintage (for example, Monte Carlo methods including those that involve Markov Chain Monte Carlo sampling). This contribution reviews the concepts, models, methods, and computations that are commonly used for the evaluation of measurement uncertainty, and illustrates their application in realistic examples drawn from multiple areas of science and technology, aiming to serve as a general, widely accessible reference.
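    One of the computational tools reviewed above, Monte Carlo evaluation of measurement uncertainty in the style of GUM Supplement 1, is sketched below for an illustrative measurand (electrical power P = V^2/R) with assumed input uncertainties.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1_000_000

# input quantities with assumed probability distributions
V = rng.normal(230.0, 0.5, n)        # volts, standard uncertainty 0.5 V
R = rng.normal(52.9, 0.2, n)         # ohms, standard uncertainty 0.2 ohm

P = V ** 2 / R                        # measurand

p_mean = P.mean()
u_p = P.std(ddof=1)                   # standard uncertainty of the measurand
lo, hi = np.percentile(P, [2.5, 97.5])
print(f"P = {p_mean:.1f} W, u(P) = {u_p:.1f} W, 95% coverage interval [{lo:.1f}, {hi:.1f}] W")
```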

  10. Progress of Aircraft System Noise Assessment with Uncertainty Quantification for the Environmentally Responsible Aviation Project

    NASA Technical Reports Server (NTRS)

    Thomas, Russell H.; Burley, Casey L.; Guo, Yueping

    2016-01-01

    Aircraft system noise predictions have been performed for NASA-modeled hybrid wing body advanced aircraft concepts with 2025 entry-into-service technology assumptions. The system noise predictions were developed over the period from 2009 to 2016 and evolved as a result of improved modeling of the aircraft concepts, design changes, technology development, flight path modeling, and the use of extensive integrated system-level experimental data. In addition, the system noise prediction models and process have been improved in many ways. An additional process is developed here for quantifying the uncertainty with a 95% confidence level. This uncertainty applies only to the aircraft system noise prediction process. For three points in time during this period, the vehicle designs, technologies, and noise prediction process are documented. For each of the three predictions, and with the information available at each of those points in time, the uncertainty is quantified using the direct Monte Carlo method with 10,000 simulations. For the prediction of the cumulative noise of an advanced aircraft at the conceptual level of design, the total uncertainty band has been reduced from 12.2 to 9.6 EPNL dB. A value of 3.6 EPNL dB is proposed as the lower limit of uncertainty possible for the cumulative system noise prediction of an advanced aircraft concept.

  11. Damage severity assessment in wind turbine blade laboratory model through fuzzy finite element model updating

    NASA Astrophysics Data System (ADS)

    Turnbull, Heather; Omenzetter, Piotr

    2017-04-01

    The recent shift towards development of clean, sustainable energy sources has provided a new challenge in terms of structural safety and reliability: with aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of the turbine, structural health monitoring (SHM) techniques would be beneficial. Physics-based SHM, in the form of calibration of a finite element model (FEM) by inverse techniques, is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage of FFEMU is its ability to account in a simple way for uncertainty within the model updating problem. Uncertainty quantification techniques, such as fuzzy sets, enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis on a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. During this investigation, damage severity estimation was investigated through the addition of small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to be in lieu of damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated on the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both the particle swarm optimization and firefly algorithms. FFEMU was able to obtain a prediction of the baseline material properties of the blade whilst also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.

  12. Combining Human and Machine Intelligence to Derive Agents' Behavioral Rules for Groundwater Irrigation

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Quinn, C.; Cai, X.

    2015-12-01

    One major challenge of agent-based modeling is to derive agents' behavioral rules in the presence of behavioral uncertainty and data scarcity. This study proposes a new approach that combines data-driven modeling based on directed information (i.e., machine intelligence) with expert domain knowledge (i.e., human intelligence) to derive the behavioral rules of agents while accounting for behavioral uncertainty. A directed information graph algorithm is applied to identify the causal relationships between agents' decisions (i.e., groundwater irrigation depth) and time series of environmental, socio-economic and institutional factors. A case study is conducted for the High Plains aquifer hydrological observatory (HO) area, U.S. Preliminary results show that four factors, namely corn price (CP), underlying groundwater level (GWL), monthly mean temperature (T) and precipitation (P), have causal influences on agents' decisions on groundwater irrigation depth (GWID) to various extents. Based on the similarity of the directed information graph for each agent, five clusters of graphs are further identified to represent all the agents' behaviors in the study area, as shown in Figure 1. Using these five representative graphs, agents' monthly optimal groundwater pumping rates are derived through probabilistic inference. Such data-driven relationships and probabilistic quantifications are then coupled with a physically based groundwater model to investigate the interactions between agents' pumping behaviors and the underlying groundwater system in the context of coupled human and natural systems.

  13. Uncertainty Quantification of Water Quality in Tamsui River in Taiwan

    NASA Astrophysics Data System (ADS)

    Kao, D.; Tsai, C.

    2017-12-01

    In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area, and also to provide a new analysis method for government or companies to establish related control and design measures. In this research, three methods are utilized to carry out the uncertainty analysis step by step with Mike 21, which is widely used for hydrodynamics and water quality modeling; the study area is the Tamsui River watershed. First, a sensitivity analysis is conducted to rank the influence of parameters and variables such as dissolved oxygen, nitrate, ammonia and phosphorus. Then the first-order error analysis (FOEA) is used to determine the number of parameters that significantly affect the variability of the simulation results. Finally, a state-of-the-art method for uncertainty analysis, the perturbance moment method (PMM), is applied; it is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when multiple uncertain parameters and variables are involved. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output are obtained from the representative points and perturbance moments based on the parallel axis theorem. Under the assumption of independent parameters and variables, the calculation time of PMM is significantly reduced relative to MCS for comparable modeling accuracy.
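
    The abstract does not spell out the PMM formulas, so the sketch below only illustrates the general idea of replacing Monte Carlo sampling with a few representative points per random variable: a three-point Gauss-Hermite rule applied to a hypothetical two-parameter water-quality response (the model function, means and standard deviations are invented placeholders, not Mike 21 quantities).

      import itertools
      import numpy as np

      # Three Gauss-Hermite points/weights for a standard normal variable:
      # z in {-sqrt(3), 0, +sqrt(3)} with weights {1/6, 2/3, 1/6}.
      Z = np.array([-np.sqrt(3.0), 0.0, np.sqrt(3.0)])
      W = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])

      def model(k_decay, load):
          """Hypothetical water-quality response (placeholder for a full model run)."""
          return 8.0 * np.exp(-k_decay) + 0.05 * load

      # Independent Gaussian inputs (means and standard deviations are illustrative).
      means = np.array([0.30, 40.0])
      stds = np.array([0.05, 8.0])

      mean_out, second_moment = 0.0, 0.0
      for idx in itertools.product(range(3), repeat=len(means)):
          x = means + stds * Z[list(idx)]   # representative point
          w = np.prod(W[list(idx)])         # product weight
          y = model(*x)
          mean_out += w * y
          second_moment += w * y ** 2

      var_out = second_moment - mean_out ** 2
      print(f"mean ~ {mean_out:.3f}, std ~ {np.sqrt(var_out):.3f}  (9 model runs)")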

  14. Quantification of CO emissions from the city of Madrid using MOPITT satellite retrievals and WRF simulations

    NASA Astrophysics Data System (ADS)

    Dekker, Iris N.; Houweling, Sander; Aben, Ilse; Röckmann, Thomas; Krol, Maarten; Martínez-Alonso, Sara; Deeter, Merritt N.; Worden, Helen M.

    2017-12-01

    The growth of mega-cities leads to air quality problems directly affecting the citizens. Satellite measurements are improving in both quality and quantity, which leads to more accurate satellite retrievals of enhanced air pollutant concentrations over large cities. In this paper, we compare and discuss both an existing and a new method for estimating urban-scale trends in CO emissions using multi-year retrievals from the MOPITT satellite instrument. The first method is mainly based on satellite data, and has the advantage of fewer assumptions, but also comes with uncertainties and limitations as shown in this paper. To improve the reliability of urban-to-regional scale emission trend estimation, we simulate MOPITT retrievals using the Weather Research and Forecast model with chemistry core (WRF-Chem). The difference between model and retrieval is used to optimize CO emissions in WRF-Chem, focusing on the city of Madrid, Spain. This method has the advantage over the existing method in that it allows both a trend analysis of CO concentrations and a quantification of CO emissions. Our analysis confirms that MOPITT is capable of detecting CO enhancements over Madrid, although significant differences remain between the yearly averaged model output and satellite measurements (R2 = 0.75) over the city. After optimization, we find Madrid CO emissions to be lower by 48 % for 2002 and by 17 % for 2006 compared with the EdgarV4.2 emission inventory. The MOPITT-derived emission adjustments lead to better agreement with the European emission inventory TNO-MACC-III for both years. This suggests that the downward trend in CO emissions over Madrid is overestimated in EdgarV4.2 and more realistically represented in TNO-MACC-III. However, our satellite- and model-based emission estimates have large uncertainties, around 20 % for 2002 and 50 % for 2006.

  15. Analytical probabilistic modeling of RBE-weighted dose for ion therapy.

    PubMed

    Wieser, H P; Hennig, P; Wahl, N; Bangert, M

    2017-11-10

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm,2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.

  16. Analytical probabilistic modeling of RBE-weighted dose for ion therapy

    NASA Astrophysics Data System (ADS)

    Wieser, H. P.; Hennig, P.; Wahl, N.; Bangert, M.

    2017-12-01

    Particle therapy is especially prone to uncertainties. This issue is usually addressed with uncertainty quantification and minimization techniques based on scenario sampling. For proton therapy, however, it was recently shown that it is also possible to use closed-form computations based on analytical probabilistic modeling (APM) for this purpose. APM yields unique features compared to sampling-based approaches, motivating further research in this context. This paper demonstrates the application of APM for intensity-modulated carbon ion therapy to quantify the influence of setup and range uncertainties on the RBE-weighted dose. In particular, we derive analytical forms for the nonlinear computations of the expectation value and variance of the RBE-weighted dose by propagating linearly correlated Gaussian input uncertainties through a pencil beam dose calculation algorithm. Both exact and approximation formulas are presented for the expectation value and variance of the RBE-weighted dose and are subsequently studied in-depth for a one-dimensional carbon ion spread-out Bragg peak. With V and B being the number of voxels and pencil beams, respectively, the proposed approximations induce only a marginal loss of accuracy while lowering the computational complexity from order O(V × B^2) to O(V × B) for the expectation value and from O(V × B^4) to O(V × B^2) for the variance of the RBE-weighted dose. Moreover, we evaluated the approximated calculation of the expectation value and standard deviation of the RBE-weighted dose in combination with a probabilistic effect-based optimization on three patient cases considering carbon ions as radiation modality against sampled references. The resulting global γ-pass rates (2 mm,2%) are > 99.15% for the expectation value and > 94.95% for the standard deviation of the RBE-weighted dose, respectively. We applied the derived analytical model to carbon ion treatment planning, although the concept is in general applicable to other ion species considering a variable RBE.

  17. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization which involve multiple formations and inversions of the global stiffness matrix and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
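
    The dissertation couples PCE intrusively with the optimization sensitivity analysis, which is beyond a short example; the sketch below shows only the basic non-intrusive ingredient, fitting a Hermite polynomial chaos expansion to a hypothetical compliance response of one Gaussian variable and reading the response statistics off the coefficients (all names and numbers are illustrative assumptions).

      import numpy as np
      from math import factorial
      from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

      def compliance(xi):
          """Hypothetical structural response as a function of a standard normal
          variable xi that parameterizes an uncertain stiffness."""
          E = 200.0 * np.exp(0.1 * xi)  # lognormal Young's modulus (illustrative)
          return 1.0e3 / E              # compliance of a one-element structure

      # Non-intrusive PCE: sample xi, evaluate the model, regress onto the Hermite basis.
      rng = np.random.default_rng(0)
      xi = rng.standard_normal(200)
      y = compliance(xi)

      order = 4
      Psi = np.column_stack([He.hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)])
      coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

      # Moments follow directly from the coefficients (E[He_k^2] = k!).
      mean = coef[0]
      var = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))
      print(f"PCE mean ~ {mean:.4f}, PCE std ~ {np.sqrt(var):.4f}")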

  18. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
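
    As a toy illustration of the inverse regression step described above (not the authors' IRUQ implementation), the sketch below applies sliced inverse regression to a 20-dimensional model whose output actually depends on a single direction, and recovers that direction from input-output samples; the model and all settings are assumptions made for the example.

      import numpy as np

      rng = np.random.default_rng(1)
      n, d = 2000, 20
      X = rng.standard_normal((n, d))
      beta = np.zeros(d); beta[0], beta[3] = 1.0, 0.5            # true low-dimensional direction
      y = np.exp(0.5 * (X @ beta)) + 0.05 * rng.standard_normal(n)  # QoI depends on one direction

      # --- sliced inverse regression (SIR) ---
      mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
      L = np.linalg.cholesky(cov)
      Z = np.linalg.solve(L, (X - mu).T).T       # standardized inputs

      H = 10                                     # number of slices
      order = np.argsort(y)
      M = np.zeros((d, d))
      for h in np.array_split(order, H):
          m_h = Z[h].mean(axis=0)
          M += (len(h) / n) * np.outer(m_h, m_h) # weighted covariance of slice means

      eigval, eigvec = np.linalg.eigh(M)
      dir_z = eigvec[:, -1]                      # leading SDR direction (standardized space)
      dir_x = np.linalg.solve(L.T, dir_z)        # back to original coordinates
      dir_x /= np.linalg.norm(dir_x)

      print("estimated direction (first 5 comps):", np.round(dir_x[:5], 3))
      print("cosine with true direction:", abs(dir_x @ beta) / np.linalg.norm(beta))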

  19. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
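
    A minimal sketch of moment-based sensitivity indices in the spirit described above: conditional means and variances are estimated by binning a Monte Carlo sample of the Ishigami benchmark, and the average distance between conditional and unconditional moments is reported for each parameter. The exact index definitions and normalizations used in the paper may differ; this is only an illustration.

      import numpy as np

      def ishigami(x):
          """Common analytical GSA benchmark (a = 7, b = 0.1)."""
          return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                  + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

      rng = np.random.default_rng(2)
      N, n_bins = 200_000, 20
      X = rng.uniform(-np.pi, np.pi, size=(N, 3))
      Y = ishigami(X)
      mean_Y, var_Y = Y.mean(), Y.var()

      for i in range(3):
          # Bin the sample according to parameter i and compute conditional moments.
          bins = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
          idx = np.clip(np.digitize(X[:, i], bins) - 1, 0, n_bins - 1)
          cond_mean = np.array([Y[idx == b].mean() for b in range(n_bins)])
          cond_var = np.array([Y[idx == b].var() for b in range(n_bins)])
          # Moment-based indices: average distance between conditional and
          # unconditional mean/variance, normalized by the unconditional value.
          amae = np.mean(np.abs(mean_Y - cond_mean)) / abs(mean_Y)
          amav = np.mean(np.abs(var_Y - cond_var)) / var_Y
          print(f"x{i+1}: mean-based index = {amae:.3f}, variance-based index = {amav:.3f}")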

  20. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues. The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm reveals a great improvement in terms of computational efficiency compared with previously studied approaches for the same problem.
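
    The sketch below is a heavily simplified, one-parameter illustration of the ingredients described above (a surrogate of the likelihood plus a Gaussian-mixture proposal), implemented as plain importance sampling rather than the authors' adaptive algorithm; the forward model, observation and proposal settings are invented for the example.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(3)

      def forward_model(k):
          """Hypothetical expensive simulator mapping a parameter to an observable."""
          return np.sin(3.0 * k) + k ** 2

      obs, sigma = 1.2, 0.1  # synthetic observation and noise level
      def log_like(k):
          return -0.5 * ((forward_model(k) - obs) / sigma) ** 2

      # 1) Train a GP surrogate of the log-likelihood on a handful of model runs.
      k_train = np.linspace(-2.0, 2.0, 15)[:, None]
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
      gp.fit(k_train, np.array([log_like(k[0]) for k in k_train]))

      # 2) Gaussian-mixture proposal (two components, allowing for multimodality).
      means, stds, weights = np.array([-1.0, 1.0]), np.array([0.5, 0.5]), np.array([0.5, 0.5])
      comp = rng.choice(2, size=20_000, p=weights)
      samples = rng.normal(means[comp], stds[comp])

      # 3) Importance weights: surrogate posterior (flat prior assumed) over proposal density.
      def gmm_pdf(x):
          return sum(w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                     for w, m, s in zip(weights, means, stds))

      log_post = gp.predict(samples[:, None])
      w_is = np.exp(log_post - log_post.max()) / gmm_pdf(samples)
      w_is /= w_is.sum()

      post_mean = np.sum(w_is * samples)
      ess = 1.0 / np.sum(w_is ** 2)
      print(f"posterior mean of k ~ {post_mean:.3f}, effective sample size ~ {ess:.0f}")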

  1. Uncertainty Quantification in Scale-Dependent Models of Flow in Porous Media: SCALE-DEPENDENT UQ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, A. M.; Panzeri, M.; Tartakovsky, G. D.

    Equations governing flow and transport in heterogeneous porous media are scale-dependent. We demonstrate that it is possible to identify a support scale η*, such that the typically employed approximate formulations of Moment Equations (ME) yield accurate (statistical) moments of a target environmental state variable. Under these circumstances, the ME approach can be used as an alternative to the Monte Carlo (MC) method for Uncertainty Quantification in diverse fields of Earth and environmental sciences. MEs are directly satisfied by the leading moments of the quantities of interest and are defined on the same support scale as the governing stochastic partial differential equations (PDEs). Computable approximations of the otherwise exact MEs can be obtained through perturbation expansion of moments of the state variables in orders of the standard deviation of the random model parameters. As such, their convergence is guaranteed only for standard deviations smaller than one. We demonstrate our approach in the context of steady-state groundwater flow in a porous medium with a spatially random hydraulic conductivity.

  2. Uncertainties in discharge projections in consequence of climate change

    NASA Astrophysics Data System (ADS)

    Liebert, J.; Düthmann, D.; Berg, P.; Feldmann, H.; Ihringer, J.; Kunstmann, H.; Merz, B.; Ott, I.; Schädler, G.; Wagner, S.

    2012-04-01

    The fourth assessment report of the IPCC summarizes possible effects of global climate change. For Europe, increasing variability of temperature and precipitation is expected. While the temperature increase is projected almost uniformly across Europe, the models indicate partly heterogeneous tendencies for precipitation. In order to maintain current safety standards in the infrastructure of water management systems, possible future flood discharges are very often a central question, and uncertainty considerations play an important role in the planning and operation of water infrastructure systems. In times of climate change, analyses of measured historical gauge data (normally 30-80 years) are not sufficient, because even significant trends are only valid within the analyzed time period and extrapolations are exceedingly difficult. Combined climate and hydrological modeling for scenario-based projections has therefore become increasingly popular. Because adaptation measures in water infrastructure are in general very time-consuming and cost-intensive, well-founded statements about the variability and uncertainty of model-based results are important as well. The CEDIM project "Flood hazards in a changing climate" focuses on both future changes in flood discharge and the uncertainties involved in such model-based predictions. In detail, the study is based on an ensemble of hydrological model (HM) simulations in three representative small to medium sized German river catchments (Ammer, Mulde and Ruhr). The meteorological input is based on two high-resolution (7 km) regional climate models (RCMs) driven by two global climate models (GCMs) for the near future (2021-2050) following the A1B emission scenario (SRES). Two of the catchments (Ruhr and Mulde) have sub-mountainous character and one (Ammer) has alpine character. Besides analyzing future changes in discharge in the catchments, a main goal of the study is the description and, where possible, quantification of the variability of the results arising from the different driving data, regionalization methods, spatial resolutions and model types, and this is the focus of the poster. The general result is a large variability in the discharge projections. In the annual regime, the identified variabilities are mainly attributable to different parts of the model chain (GCM-RCM-HM): in winter the global climate models (GCMs) contribute the main uncertainties to the future projection, whereas in summer the main variability stems from the meteorological downscaling to the regional scale (RCMs) in combination with the hydrological modeling (HM). With appropriate ensemble statistics, however, mean future tendencies are detectable despite the large variabilities. The Ruhr catchment shows tendencies towards higher future flood discharges, while no significant changes are expected in the Ammer and Mulde catchments.

  3. Analysis of ISO NE Balancing Requirements: Uncertainty-based Secure Ranges for ISO New England Dynamic Interchange Adjustments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di

    The document describes detailed uncertainty quantification (UQ) methodology developed by PNNL to estimate secure ranges of potential dynamic intra-hour interchange adjustments in the ISO-NE system and provides description of the dynamic interchange adjustment (DINA) tool developed under the same contract. The overall system ramping up and down capability, spinning reserve requirements, interchange schedules, load variations and uncertainties from various sources that are relevant to the ISO-NE system are incorporated into the methodology and the tool. The DINA tool has been tested by PNNL and ISO-NE staff engineers using ISO-NE data.

  4. The need for precise and well-documented experimental data on prompt fission neutron spectra from neutron-induced fission of 239Pu

    DOE PAGES

    Neudecker, Denise; Taddeucci, Terry Nicholas; Haight, Robert Cameron; ...

    2016-01-06

    The spectrum of neutrons emitted promptly after 239Pu(n,f), the so-called prompt fission neutron spectrum (PFNS), is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties as well as the neutron multiplicity of selected critical assemblies. A summary of data and documentation needs to improve the quality of the experimental database is provided based on the results of simulations and test-evaluations. Furthermore, given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the 239Pu PFNS as a ratio to either the 235U or 252Cf PFNS.

  5. Iterative refinement of implicit boundary models for improved geological feature reproduction

    NASA Astrophysics Data System (ADS)

    Martin, Ryan; Boisvert, Jeff B.

    2017-12-01

    Geological domains contain non-stationary features that cannot be described by a single direction of continuity. Non-stationary estimation frameworks generate more realistic curvilinear interpretations of subsurface geometries. A radial basis function (RBF) based implicit modeling framework using domain decomposition is developed that permits introduction of locally varying orientations and magnitudes of anisotropy for boundary models to better account for the local variability of complex geological deposits. The interpolation framework is paired with a method to automatically infer the locally predominant orientations, which results in a rapid and robust iterative non-stationary boundary modeling technique that can refine locally anisotropic geological shapes automatically from the sample data. The method also permits quantification of the volumetric uncertainty associated with the boundary modeling. The methodology is demonstrated on a porphyry dataset and shows improved local geological features.
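
    A minimal sketch of the basic implicit-modeling idea (without the domain decomposition, local anisotropy or iterative refinement described above): inside/outside indicator data are interpolated with a radial basis function, and the zero level set of the interpolant is taken as the boundary. The synthetic "drill-hole" data and elliptical target are assumptions for illustration.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(4)

      # Hypothetical drill-hole samples: coded +1 inside the geological domain,
      # -1 outside (an elliptical ore body serves as the "true" geometry here).
      pts = rng.uniform(-1.0, 1.0, size=(300, 2))
      inside = (pts[:, 0] / 0.7) ** 2 + (pts[:, 1] / 0.4) ** 2 < 1.0
      indicator = np.where(inside, 1.0, -1.0)

      # Fit a smooth implicit function f(x, y); the modeled boundary is f = 0.
      rbf = RBFInterpolator(pts, indicator, kernel="thin_plate_spline", smoothing=1e-3)

      # Evaluate on a grid; cells with f > 0 are classified as inside the domain.
      gx, gy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
      grid = np.column_stack([gx.ravel(), gy.ravel()])
      f = rbf(grid).reshape(gx.shape)

      modeled_area = 4.0 * np.mean(f > 0)   # fraction of the 2x2 window inside
      true_area = np.pi * 0.7 * 0.4
      print(f"modeled area ~ {modeled_area:.3f}, true area ~ {true_area:.3f}")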

  6. Quantification of Emission Factor Uncertainty

    EPA Science Inventory

    Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...

  7. Quantification of agricultural drought occurrence as an estimate for insurance programs

    NASA Astrophysics Data System (ADS)

    Bannayan, M.; Hoogenboom, G.

    2015-11-01

    Temporal irregularities of rainfall and drought have major impacts on rainfed cropping systems. The main goal of this study was to develop an approach for realizing drought occurrence based on local winter wheat yield loss and rainfall. The domain study included 11 counties in the state of Washington that actively grow rainfed winter wheat and an uncertainty rainfall evaluation model using daily rainfall values from 1985 to 2007. An application was developed that calculates a rainfall index for insurance that was then used to determine the drought intensity for each study year and for each study site. Evaluation of the drought intensity showed that both the 1999-2000 and 2000-2001 growing seasons were stressful years for most of the study locations, while the 2005-2006 and the 2006-2007 growing seasons experienced the lowest drought intensity for all locations. Our results are consistent with local extension reports of drought occurrences. Quantification of drought intensity based on this application could provide a convenient index for insurance companies for determining the effect of rainfall and drought on crop yield loss under the varying weather conditions of semi-arid regions.

  8. BN-FLEMOps pluvial - A probabilistic multi-variable loss estimation model for pluvial floods

    NASA Astrophysics Data System (ADS)

    Roezer, V.; Kreibich, H.; Schroeter, K.; Doss-Gollin, J.; Lall, U.; Merz, B.

    2017-12-01

    Pluvial flood events, such as in Copenhagen (Denmark) in 2011, Beijing (China) in 2012 or Houston (USA) in 2016, have caused severe losses to urban dwellings in recent years. These floods are caused by storm events with high rainfall rates well above the design levels of urban drainage systems, which lead to inundation of streets and buildings. A projected increase in frequency and intensity of heavy rainfall events in many areas and ongoing urbanization may increase pluvial flood losses in the future. For efficient risk assessment of and adaptation to pluvial floods, a quantification of the flood risk is needed. Few loss models have been developed particularly for pluvial floods. These models usually use simple water-level- or rainfall-loss functions and come with very high uncertainties. To account for these uncertainties and improve the loss estimation, we present a probabilistic multi-variable loss estimation model for pluvial floods based on empirical data. The model was developed in a two-step process using a machine learning approach and a comprehensive database comprising 783 records of direct building and content damage of private households. The data were gathered through surveys after four different pluvial flood events in Germany between 2005 and 2014. In the first step, linear and non-linear machine learning algorithms, such as tree-based and penalized regression models, were used to identify the most important loss-influencing factors among a set of 55 candidate variables. These variables comprise hydrological and hydraulic aspects, early warning, precaution, building characteristics and the socio-economic status of the household. In the second step, the most important loss-influencing variables were used to derive a probabilistic multi-variable pluvial flood loss estimation model based on Bayesian Networks. Two different networks were tested: a score-based network learned from the data and a network based on expert knowledge. Loss predictions are made through Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. With the ability to cope with incomplete information and use expert knowledge, as well as inherently providing quantitative uncertainty information, it is shown that loss models based on BNs are superior to deterministic approaches for pluvial flood risk assessment.
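
    As a sketch of the first, screening step described above (and not of the BN-FLEMOps model itself), the snippet below ranks candidate loss-influencing variables with a tree-based model on synthetic survey-like data; all variable names, distributions and the loss relation are invented placeholders.

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(5)
      n = 783  # same order of magnitude as the survey records described above

      # Synthetic stand-ins for candidate loss-influencing variables.
      df = pd.DataFrame({
          "water_depth_cm": rng.gamma(2.0, 15.0, n),
          "duration_h": rng.gamma(2.0, 3.0, n),
          "precaution_score": rng.integers(0, 5, n),
          "building_area_m2": rng.normal(140, 40, n),
          "household_income": rng.lognormal(10, 0.4, n),
      })
      # Relative building loss driven mainly by depth and duration (illustrative).
      loss = (0.004 * df.water_depth_cm + 0.01 * df.duration_h
              - 0.01 * df.precaution_score + rng.normal(0, 0.05, n))

      rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(df, loss)
      ranking = pd.Series(rf.feature_importances_, index=df.columns).sort_values(ascending=False)
      print(ranking)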

  9. Uncertainty Quantification in Remaining Useful Life of Aerospace Components using State Space Models and Inverse FORM

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty; (2) process noise; and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of the remaining useful life prediction, with the results verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.
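
    Inverse-FORM itself is not reproduced here; the sketch below only mimics the Monte Carlo reference against which such results are verified, by propagating an uncertain state estimate, future loading uncertainty and process noise through a hypothetical degradation model and collecting the resulting RUL distribution.

      import numpy as np

      rng = np.random.default_rng(6)
      n_mc = 20_000
      threshold = 1.0                 # failure when the damage state exceeds this value
      x0_mean, x0_std = 0.35, 0.02    # uncertain current state estimate

      rul = np.empty(n_mc)
      for i in range(n_mc):
          x = rng.normal(x0_mean, x0_std)               # uncertainty in the state estimate
          t = 0
          while x < threshold and t < 500:
              load = rng.lognormal(mean=0.0, sigma=0.3) # future loading uncertainty
              x += 0.01 * load + rng.normal(0.0, 0.002) # damage growth plus process noise
              t += 1
          rul[i] = t

      print(f"median RUL = {np.median(rul):.0f} cycles")
      print("90% bounds:", np.percentile(rul, [5, 95]))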

  10. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard; Bostelmann, F.

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.

  11. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  12. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  13. The Prediction-Focused Approach: An opportunity for hydrogeophysical data integration and interpretation

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2017-04-01

    Hydrogeophysics is an interdisciplinary field of sciences aiming at a better understanding of subsurface hydrological processes. While geophysical surveys have been successfully used to qualitatively characterize the subsurface, two important challenges remain for a better quantification of hydrological processes: (1) the inversion of geophysical data and (2) their integration in hydrological subsurface models. The classical inversion approach using regularization suffers from spatially and temporally varying resolution and yields geologically unrealistic solutions without uncertainty quantification, making their utilization for hydrogeological calibration less consistent. More advanced techniques such as coupled inversion allow for a direct use of geophysical data for conditioning groundwater and solute transport model calibration. However, the technique is difficult to apply in complex cases and remains computationally demanding to estimate uncertainty. In a recent study, we investigate a prediction-focused approach (PFA) to directly estimate subsurface physical properties from geophysical data, circumventing the need for classic inversions. In PFA, we seek a direct relationship between the data and the subsurface variables we want to predict (the forecast). This relationship is obtained through a prior set of subsurface models for which both data and forecast are computed. A direct relationship can often be derived through dimension reduction techniques. PFA offers a framework for both hydrogeophysical "inversion" and hydrogeophysical data integration. For hydrogeophysical "inversion", the considered forecast variable is the subsurface variable, such as the salinity. An ensemble of possible solutions is generated, allowing uncertainty quantification. For hydrogeophysical data integration, the forecast variable becomes the prediction we want to make with our subsurface models, such as the concentration of contaminant in a drinking water production well. Geophysical and hydrological data are combined to derive a direct relationship between data and forecast. We illustrate the process for the design of an aquifer thermal energy storage (ATES) system. An ATES system can theoretically recover in winter the heat stored in the aquifer during summer. In practice, the energy efficiency is often lower than expected due to spatial heterogeneity of hydraulic properties combined with an unfavorable hydrogeological gradient. A proper design of ATES systems should consider the uncertainty of the prediction related to those parameters. With a global sensitivity analysis, we identify sensitive parameters for heat storage prediction and validate the use of a short term heat tracing experiment monitored with geophysics to generate informative data. First, we illustrate how PFA can be used to successfully derive the distribution of temperature in the aquifer from ERT during the heat tracing experiment. Then, we successfully integrate the geophysical data to predict medium-term heat storage in the aquifer using PFA. The result is a full quantification of the posterior distribution of the prediction conditioned to observed data within a relatively limited time budget.

  14. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement, so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
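
    A minimal sketch of the cross-validation idea described above: a simple regression baseline (hour-of-day indicators plus a cooling-degree term) is fitted to synthetic interval meter data, and k-fold cross-validated residuals are used to quantify the uncertainty of baseline predictions. The data, model form and fold count are illustrative assumptions, not the paper's buildings or method details.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(7)
      n = 24 * 90                           # 90 days of hourly meter data
      hour = np.arange(n) % 24
      temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / (24 * 365)) + rng.normal(0, 2, n)
      # Synthetic baseline load: time-of-day pattern plus temperature dependence.
      load = (50 + 20 * (hour > 7) * (hour < 19) + 1.5 * np.clip(temp - 18, 0, None)
              + rng.normal(0, 3, n))

      # Simple regression baseline: hour-of-day indicators and a cooling-degree term.
      X = np.column_stack([np.eye(24)[hour], np.clip(temp - 18, 0, None)])

      cv_resid = np.empty(n)
      for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
          model = LinearRegression().fit(X[train], load[train])
          cv_resid[test] = load[test] - model.predict(X[test])

      # Cross-validated residuals quantify the baseline prediction uncertainty.
      print(f"CV RMSE = {np.sqrt(np.mean(cv_resid ** 2)):.2f} kWh per interval")
      print(f"95% prediction band half-width ~ {1.96 * cv_resid.std():.2f} kWh")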

  15. Global Bedload Flux Modeling and Analysis in Large Rivers

    NASA Astrophysics Data System (ADS)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and the management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The scarcity of bedload measurements is particularly acute in developing countries, where changes in sediment yields are high. The paucity of bedload measurements is the result of 1) the nature of the problem (large spatial and temporal uncertainties), and 2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins are >1,000 km). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications to global fluvial geomorphology (e.g. the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.

  16. Streamflow loss quantification for groundwater flow modeling using a wading-rod-mounted acoustic Doppler current profiler in a headwater stream

    NASA Astrophysics Data System (ADS)

    Pflügl, Christian; Hoehn, Philipp; Hofmann, Thilo

    2017-04-01

    Irrespective of the availability of various field measurement and modeling approaches, the quantification of interactions between surface water and groundwater systems remains associated with high uncertainty. Such uncertainties in stream-aquifer interaction can lead to significant misinterpretation of the local water budget and water quality. Because stream discharge rates typically vary considerably over time, it is desirable to reduce the duration of streamflow measurements while also reducing their uncertainty. Streamflow measurements, according to the velocity-area method, have been performed along reaches of a losing-disconnected, subalpine headwater stream using a 2-dimensional, wading-rod-mounted acoustic Doppler current profiler (ADCP). The method was chosen, with stream morphology not allowing for boat-mounted setups, to reduce uncertainty compared to conventional, single-point streamflow measurements of similar measurement duration. Reach-averaged stream loss rates were subsequently quantified between 12 cross sections. They enabled the delineation of strongly infiltrating stream reaches and their differentiation from insignificantly infiltrating reaches. Furthermore, a total of 10 near-stream observation wells were constructed and/or equipped with pressure and temperature loggers. The time series of near-stream groundwater temperature data were cross-correlated with stream temperature time series to yield supportive qualitative information on the delineation of infiltrating reaches. Subsequently, as a reference parameterization, the hydraulic conductivity and specific yield of a numerical, steady-state model of groundwater flow, in the unconfined glaciofluvial aquifer adjacent to the stream, were inversely determined incorporating the inferred stream loss rates. Applying synthetic sets of infiltration rates, resembling increasing levels of uncertainty associated with single-point streamflow measurements of comparable duration, the same inversion procedure was run. The volume-weighted mean of the respective parameter distribution within 200 m of stream periphery deviated increasingly from the reference parameterization at increasing deviation of infiltration rates.

  17. Quantification of uncertainty in aerosol optical thickness retrieval arising from aerosol microphysical model and other sources, applied to Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2014-05-01

    Satellite instruments are nowadays successfully utilised for measuring atmospheric aerosol in many applications as well as in research. Therefore, there is a growing need for rigorous error characterisation of the measurements. Here, we introduce a methodology for quantifying the uncertainty in the retrieval of aerosol optical thickness (AOT). In particular, we concentrate on two aspects: uncertainty due to aerosol microphysical model selection and uncertainty due to imperfect forward modelling. We apply the introduced methodology for aerosol optical thickness retrieval of the Ozone Monitoring Instrument (OMI) on board NASA's Earth Observing System (EOS) Aura satellite, launched in 2004. We apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness retrieval by propagating aerosol microphysical model selection and forward model error more realistically. For the microphysical model selection problem, we utilise Bayesian model selection and model averaging methods. Gaussian processes are utilised to characterise the smooth systematic discrepancies between the measured and modelled reflectances (i.e. residuals). The spectral correlation is composed empirically by exploring a set of residuals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques introduced here. The method and improved uncertainty characterisation is demonstrated by several examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The statistical methodology presented is general; it is not restricted to this particular satellite retrieval application.
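
    The sketch below illustrates only the Bayesian model averaging ingredient on synthetic data: three hypothetical aerosol "models" predict spectral reflectance as a function of AOT, model evidences are obtained by marginalizing a Gaussian likelihood over AOT on a grid, and the AOT estimate is averaged with the resulting model weights. It is not the OMAERO algorithm, and all model forms and numbers are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(8)
      wavelengths = np.array([342.5, 388.0, 442.0, 463.0])  # nm (illustrative)
      sigma = 0.004                                          # assumed reflectance noise

      # Three hypothetical candidate aerosol models: reflectance as a function of AOT.
      def weakly_absorbing(aot): return 0.06 + 0.11 * aot * (wavelengths / 400.0) ** -0.4
      def biomass_burning(aot):  return 0.05 + 0.09 * aot * (wavelengths / 400.0) ** -1.2
      def desert_dust(aot):      return 0.07 + 0.10 * aot * (wavelengths / 400.0) ** -0.1
      models = {"weakly_absorbing": weakly_absorbing,
                "biomass_burning": biomass_burning,
                "desert_dust": desert_dust}

      # Synthetic "observation" generated from one of the models.
      y_obs = desert_dust(0.65) + rng.normal(0.0, sigma, size=4)

      aot_grid = np.linspace(0.0, 2.0, 401)
      d_aot = aot_grid[1] - aot_grid[0]
      evidence, aot_mean = {}, {}
      for name, f in models.items():
          resid = y_obs[None, :] - np.array([f(a) for a in aot_grid])
          like = np.exp(-0.5 * np.sum((resid / sigma) ** 2, axis=1))
          evidence[name] = like.sum() * d_aot                  # marginal likelihood, flat AOT prior
          aot_mean[name] = (aot_grid * like).sum() / like.sum()

      weights = {k: v / sum(evidence.values()) for k, v in evidence.items()}
      aot_bma = sum(weights[k] * aot_mean[k] for k in models)
      print("posterior model weights:", {k: round(w, 3) for k, w in weights.items()})
      print(f"model-averaged AOT estimate: {aot_bma:.3f}")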

  18. A Review of Hypersonics Aerodynamics, Aerothermodynamics and Plasmadynamics Activities within NASA's Fundamental Aeronautics Program

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.

    2007-01-01

    The research program of the aerodynamics, aerothermodynamics and plasmadynamics discipline of NASA's Hypersonic Project is reviewed. Details are provided for each of its three components: 1) development of physics-based models of non-equilibrium chemistry, surface catalytic effects, turbulence, transition and radiation; 2) development of advanced simulation tools to enable increased spatial and time accuracy, increased geometrical complexity, grid adaptation, increased physical-processes complexity, uncertainty quantification and error control; and 3) establishment of experimental databases from ground and flight experiments to develop better understanding of high-speed flows and to provide data to validate and guide the development of simulation tools.

  19. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions, and boundary conditions. Reliability methods measure structural safety and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in its uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
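
    As a small illustration of the reliability analysis that RBDO builds on, the sketch below estimates the failure probability of an assumed limit state g = R - S by Monte Carlo and converts it to an equivalent reliability index; the distributions are placeholders, not values from the paper.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(9)
      n = 1_000_000

      # Illustrative limit state g = R - S (resistance minus load); failure when g < 0.
      R = rng.lognormal(mean=np.log(300.0), sigma=0.1, size=n)  # resistance (e.g., MPa)
      S = rng.normal(loc=200.0, scale=30.0, size=n)             # load effect
      g = R - S

      pf = np.mean(g < 0.0)   # Monte Carlo failure probability
      beta = -norm.ppf(pf)    # equivalent reliability index
      print(f"Pf ~ {pf:.2e}, beta ~ {beta:.2f}")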

  20. Data-driven agent-based modeling, with application to rooftop solar adoption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua

    Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends and provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income households. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas the policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.

  1. Data-driven agent-based modeling, with application to rooftop solar adoption

    DOE PAGES

    Zhang, Haifeng; Vorobeychik, Yevgeniy; Letchford, Joshua; ...

    2016-01-25

    Agent-based modeling is commonly used for studying complex system properties emergent from interactions among many agents. We present a novel data-driven agent-based modeling framework applied to forecasting individual and aggregate residential rooftop solar adoption in San Diego county. Our first step is to learn a model of individual agent behavior from combined data of individual adoption characteristics and property assessment. We then construct an agent-based simulation with the learned model embedded in artificial agents, and proceed to validate it using a holdout sequence of collective adoption decisions. We demonstrate that the resulting agent-based model successfully forecasts solar adoption trends and provides a meaningful quantification of uncertainty about its predictions. We utilize our model to optimize two classes of policies aimed at spurring solar adoption: one that subsidizes the cost of adoption, and another that gives away free systems to low-income households. We find that the optimal policies derived for the latter class are significantly more efficacious, whereas the policies similar to the current California Solar Initiative incentive scheme appear to have a limited impact on overall adoption trends.

  2. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bledsoe, Keith C.

    2015-04-01

    The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
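
    As a hedged illustration of the convergence metric mentioned above (not the INVERSE implementation), the following sketch computes the Gelman-Rubin potential scale reduction factor for a set of chains and notes how a stopping rule might use it; the chains and the 1.05 threshold are illustrative.

      # Hedged sketch of the Gelman-Rubin convergence diagnostic (R-hat) as it might be
      # used to stop DREAM-style chains automatically; shapes and thresholds are illustrative.
      import numpy as np

      def gelman_rubin(chains):
          """chains: array of shape (n_chains, n_samples) for one parameter."""
          m, n = chains.shape
          chain_means = chains.mean(axis=1)
          grand_mean = chain_means.mean()
          B = n / (m - 1) * np.sum((chain_means - grand_mean)**2)   # between-chain variance
          W = chains.var(axis=1, ddof=1).mean()                     # within-chain variance
          var_hat = (n - 1) / n * W + B / n                         # pooled variance estimate
          return np.sqrt(var_hat / W)                               # potential scale reduction factor

      rng = np.random.default_rng(0)
      chains = rng.normal(0.0, 1.0, size=(4, 2000))                 # 4 well-mixed chains
      print("R-hat =", round(gelman_rubin(chains), 3))              # ~1.0 indicates convergence
      # A stopping rule might end the transport calculations once R-hat < 1.05 for all parameters.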

  3. Data-Constrained Projections of Methane Fluxes in a Northern Minnesota Peatland in Response to Elevated CO2 and Warming

    DOE PAGES

    Ma, Shuang; Jiang, Jiang; Huang, Yuanyuan; ...

    2017-10-20

    Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux measurements under ambient environment from 2011 to 2014 at the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experimental site and developed a data-informed process-based methane module. The module was incorporated into the Terrestrial ECOsystem (TECO) model before its parameters were constrained with multiple years of methane flux data for forecasting CH4 emission under five warming and two elevated CO2 treatments at SPRUCE. We found that 9°C warming treatments significantly increased methane emission by approximately 400%, and elevated CO2 treatments stimulated methane emission by 10.4%–23.6% in comparison with ambient conditions. The relative contribution of plant-mediated transport to methane emission decreased from 96% at the control to 92% at the 9°C warming, largely to compensate for an increase in ebullition. The uncertainty in plant-mediated transportation and ebullition increased with warming and contributed to the overall changes of emissions uncertainties. At the same time, our modeling results indicated a significant increase in the emitted CH4:CO2 ratio. This result, together with the larger warming potential of CH4, will lead to a strong positive feedback from terrestrial ecosystems to climate warming. In conclusion, the model-data fusion approach used in this study enabled parameter estimation and uncertainty quantification for forecasting methane fluxes.

  4. Data-Constrained Projections of Methane Fluxes in a Northern Minnesota Peatland in Response to Elevated CO2 and Warming

    NASA Astrophysics Data System (ADS)

    Ma, Shuang; Jiang, Jiang; Huang, Yuanyuan; Shi, Zheng; Wilson, Rachel M.; Ricciuto, Daniel; Sebestyen, Stephen D.; Hanson, Paul J.; Luo, Yiqi

    2017-11-01

    Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux measurements under ambient environment from 2011 to 2014 at the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experimental site and developed a data-informed process-based methane module. The module was incorporated into the Terrestrial ECOsystem (TECO) model before its parameters were constrained with multiple years of methane flux data for forecasting CH4 emission under five warming and two elevated CO2 treatments at SPRUCE. We found that 9°C warming treatments significantly increased methane emission by approximately 400%, and elevated CO2 treatments stimulated methane emission by 10.4%-23.6% in comparison with ambient conditions. The relative contribution of plant-mediated transport to methane emission decreased from 96% at the control to 92% at the 9°C warming, largely to compensate for an increase in ebullition. The uncertainty in plant-mediated transportation and ebullition increased with warming and contributed to the overall changes of emissions uncertainties. At the same time, our modeling results indicated a significant increase in the emitted CH4:CO2 ratio. This result, together with the larger warming potential of CH4, will lead to a strong positive feedback from terrestrial ecosystems to climate warming. The model-data fusion approach used in this study enabled parameter estimation and uncertainty quantification for forecasting methane fluxes.

  5. Data-Constrained Projections of Methane Fluxes in a Northern Minnesota Peatland in Response to Elevated CO2 and Warming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Shuang; Jiang, Jiang; Huang, Yuanyuan

    Large uncertainties exist in predicting responses of wetland methane (CH4) fluxes to future climate change. However, sources of the uncertainty have not been clearly identified despite the fact that methane production and emission processes have been extensively explored. In this study, we took advantage of manual CH4 flux measurements under ambient environment from 2011 to 2014 at the Spruce and Peatland Responses Under Changing Environments (SPRUCE) experimental site and developed a data-informed process-based methane module. The module was incorporated into the Terrestrial ECOsystem (TECO) model before its parameters were constrained with multiple years of methane flux data for forecasting CH4 emission under five warming and two elevated CO2 treatments at SPRUCE. We found that 9°C warming treatments significantly increased methane emission by approximately 400%, and elevated CO2 treatments stimulated methane emission by 10.4%–23.6% in comparison with ambient conditions. The relative contribution of plant-mediated transport to methane emission decreased from 96% at the control to 92% at the 9°C warming, largely to compensate for an increase in ebullition. The uncertainty in plant-mediated transportation and ebullition increased with warming and contributed to the overall changes of emissions uncertainties. At the same time, our modeling results indicated a significant increase in the emitted CH4:CO2 ratio. This result, together with the larger warming potential of CH4, will lead to a strong positive feedback from terrestrial ecosystems to climate warming. In conclusion, the model-data fusion approach used in this study enabled parameter estimation and uncertainty quantification for forecasting methane fluxes.

  6. Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.

    PubMed

    Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J

    2018-03-01

    Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
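
    To make the filtering idea concrete, here is a minimal extended-Kalman-filter sketch over a synthetic effective-temperature decay; the single-exponential model stands in for the real spectroscopic and heat transfer submodels, and all constants are assumptions rather than values from the paper.

      # Hedged sketch of an extended Kalman filter pass over a TiRe-LII-like signal.
      # A single-exponential temperature decay replaces the real heat transfer model;
      # all parameter values are illustrative.
      import numpy as np

      def ekf(measurements, dt, tau, T_gas, q, r, x0, p0):
          """Track an effective temperature decaying toward T_gas with rate 1/tau."""
          x, P = x0, p0
          estimates, variances = [], []
          F = 1.0 - dt / tau                      # Jacobian of the (linearised) decay model
          for z in measurements:
              # predict
              x = T_gas + (x - T_gas) * F
              P = F * P * F + q
              # update (direct temperature measurement, H = 1)
              K = P / (P + r)
              x = x + K * (z - x)
              P = (1.0 - K) * P
              estimates.append(x)
              variances.append(P)
          return np.array(estimates), np.array(variances)

      rng = np.random.default_rng(1)
      dt, tau, T_gas = 5e-9, 200e-9, 1700.0
      t = np.arange(0, 1e-6, dt)
      true_T = T_gas + (3300.0 - T_gas) * np.exp(-t / tau)
      meas = true_T + rng.normal(0.0, 50.0, t.size)        # noisy effective temperatures
      T_est, T_var = ekf(meas, dt, tau, T_gas, q=25.0, r=50.0**2, x0=3200.0, p0=1e4)
      print(f"final estimate {T_est[-1]:.0f} K +/- {np.sqrt(T_var[-1]):.0f} K (1-sigma)")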

  7. Quantifying and predicting interpretational uncertainty in cross-sections

    NASA Astrophysics Data System (ADS)

    Randle, Charles; Bond, Clare; Monaghan, Alison; Lark, Murray

    2015-04-01

    Cross-sections are often constructed from data to create a visual impression of the geologist's interpretation of the sub-surface geology. However, as with all interpretations, this vision of the sub-surface geology is uncertain. We have designed and carried out an experiment with the aim of quantifying the uncertainty in geological cross-sections created by experts interpreting borehole data. By analysing different attributes of the data and interpretations we reflect on the main controls on uncertainty. A group of ten expert modellers at the British Geological Survey were asked to interpret an 11.4 km long cross-section from south-east Glasgow, UK. The data provided consisted of map and borehole data of the superficial deposits and shallow bedrock. Each modeller had a unique set of 11 boreholes removed from their dataset, to which their interpretations of the top of the bedrock were compared. This methodology allowed quantification of how far from the 'correct answer' each interpretation is at 11 points along each interpreted cross-section line, through comparison of the interpreted and actual bedrock elevations in the boreholes. This resulted in the collection of 110 measurements of the error to use in further analysis. To determine the potential controls on uncertainty, various attributes relating to the modeller, the interpretation and the data were recorded. Modellers were asked to fill out a questionnaire asking for information such as how much 3D modelling experience they had, and how long it took them to complete the interpretation. They were also asked to record their confidence in their interpretations graphically, in the form of a confidence level drawn onto the cross-section. Initial analysis showed that the majority of the experts' interpreted bedrock elevations were within 5 metres of those recorded in the withheld boreholes. Their distribution is peaked and symmetrical about a mean of zero, indicating that there was no tendency for the experts to either underestimate or overestimate the elevation of the bedrock. More complex analysis was completed in the form of linear mixed effects modelling. The modelling was used to determine if there were any correlations between the error and any other parameter recorded in the questionnaire, the section, or the initial dataset. This has resulted in the determination of both data-based and interpreter-based controls on uncertainty, adding insight into how uncertainty can be predicted, as well as how interpretation workflows can be improved. Our results will inform further experiments across a wide variety of geological situations to build understanding and best practice workflows for cross-section interpretation to reduce uncertainty.
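
    As an illustration of the linear mixed-effects step described above, the sketch below fits a random-intercept model (interpreter as the grouping factor) to synthetic error data; the variable names, data, and effect sizes are invented and do not reproduce the BGS experiment.

      # Hedged sketch of a linear mixed-effects analysis of interpretation error, with the
      # interpreter treated as a random effect; data and variable names are invented.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n_modellers, n_points = 10, 11
      df = pd.DataFrame({
          "modeller": np.repeat(np.arange(n_modellers), n_points),
          "experience_yr": np.repeat(rng.integers(1, 20, n_modellers), n_points),
          "confidence": rng.uniform(0, 1, n_modellers * n_points),
      })
      modeller_bias = rng.normal(0, 1.0, n_modellers)                 # random intercept per expert
      df["error_m"] = (modeller_bias[df["modeller"]]
                       - 0.05 * df["experience_yr"]
                       + rng.normal(0, 2.0, len(df)))                 # elevation error vs. withheld boreholes

      # Fixed effects: experience and self-reported confidence; random intercept: modeller.
      model = smf.mixedlm("error_m ~ experience_yr + confidence", df, groups=df["modeller"])
      result = model.fit()
      print(result.summary())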

  8. Simulating the impacts of disturbances on forest carbon cycling in North America: Processes, data, models, and challenges

    USGS Publications Warehouse

    Liu, Shuguang; Bond-Lamberty, Ben; Hicke, Jeffrey A.; Vargas, Rodrigo; Zhao, Shuqing; Chen, Jing; Edburg, Steven L.; Hu, Yueming; Liu, Jinxun; McGuire, A. David; Xiao, Jingfeng; Keane, Robert; Yuan, Wenping; Tang, Jianwu; Luo, Yiqi; Potter, Christopher; Oeding, Jennifer

    2011-01-01

    Forest disturbances greatly alter the carbon cycle at various spatial and temporal scales. It is critical to understand disturbance regimes and their impacts to better quantify regional and global carbon dynamics. This review of the status and major challenges in representing the impacts of disturbances in modeling the carbon dynamics across North America revealed some major advances and challenges. First, significant advances have been made in representation, scaling, and characterization of disturbances that should be included in regional modeling efforts. Second, there is a need to develop effective and comprehensive process‐based procedures and algorithms to quantify the immediate and long‐term impacts of disturbances on ecosystem succession, soils, microclimate, and cycles of carbon, water, and nutrients. Third, our capability to simulate the occurrences and severity of disturbances is very limited. Fourth, scaling issues have rarely been addressed in continental scale model applications. It is not fully understood which finer scale processes and properties need to be scaled to coarser spatial and temporal scales. Fifth, there are inadequate databases on disturbances at the continental scale to support the quantification of their effects on the carbon balance in North America. Finally, procedures are needed to quantify the uncertainty of model inputs, model parameters, and model structures, and thus to estimate their impacts on overall model uncertainty. Working together, the scientific community interested in disturbance and its impacts can identify the most uncertain issues surrounding the role of disturbance in the North American carbon budget and develop working hypotheses to reduce the uncertainty.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel; Vesselinov, Velimir V.

    MADSpython (Model analysis and decision support tools in Python) is a code in Python that streamlines the process of using data and models for analysis and decision support using the code MADS. MADS is open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). MADS can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. The Python scripts in MADSpython facilitate the generation of input and output files needed by MADS as well as by the external simulators, which include FEHM and PFLOTRAN. MADSpython enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. MADSpython will be released under the GPL V3 license. MADSpython will be distributed as a Git repo at gitlab.com and github.com. The MADSpython manual and documentation will be posted at http://madspy.lanl.gov.

  10. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
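
    A minimal two-level sketch of the recursive co-kriging idea is given below, assuming a scalar field, a fixed fidelity scaling factor rho, and independence between the low-fidelity GP and the discrepancy GP; the functions and sample sizes are toy stand-ins, not the vector-valued formulation of the paper.

      # Hedged two-level sketch in the spirit of multi-fidelity GPR: a GP is fit to cheap
      # low-fidelity data, and a second GP models the scaled discrepancy at a few
      # expensive high-fidelity points.  Functions and rho are illustrative.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      def f_low(x):  return 0.5 * np.sin(8 * x) + 0.2 * x          # cheap surrogate model
      def f_high(x): return np.sin(8 * x) + 0.3 * x                # "truth" / expensive model

      x_lo = np.linspace(0, 1, 30)[:, None]
      x_hi = np.array([[0.05], [0.3], [0.55], [0.8], [0.95]])      # only a few high-fidelity runs

      gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF(0.1), alpha=1e-8).fit(x_lo, f_low(x_lo.ravel()))

      rho = 2.0                                                    # assumed fidelity scaling factor
      resid = f_high(x_hi.ravel()) - rho * gp_lo.predict(x_hi)     # discrepancy at high-fidelity points
      gp_delta = GaussianProcessRegressor(ConstantKernel() * RBF(0.2), alpha=1e-8).fit(x_hi, resid)

      x_test = np.linspace(0, 1, 5)[:, None]
      mu_lo, sd_lo = gp_lo.predict(x_test, return_std=True)
      mu_d, sd_d = gp_delta.predict(x_test, return_std=True)
      mu_mf = rho * mu_lo + mu_d                                   # multi-fidelity posterior mean
      sd_mf = np.sqrt((rho * sd_lo)**2 + sd_d**2)                  # independence assumed between levels
      for xt, m, s in zip(x_test.ravel(), mu_mf, sd_mf):
          print(f"x={xt:.2f}  prediction={m:+.3f} +/- {s:.3f}  truth={f_high(xt):+.3f}")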

  11. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification methods, but often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information of the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategies. This scheme combines the efficiency of local optimization for solving large scale inverse problems and makes the sampling of the local solution space possible thanks to its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June, 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
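
    The following sketch illustrates, under strong simplifications, the ETKF analysis step that yields a low-rank posterior covariance from an ensemble: a random linear operator stands in for the wave-equation forward problem, and all dimensions and noise levels are illustrative.

      # Hedged sketch of an ensemble transform Kalman filter (ETKF) analysis step used to
      # approximate a posterior covariance from an ensemble of FWI models; the linear
      # "forward operator" H is a toy stand-in for a wave-equation solver.
      import numpy as np

      rng = np.random.default_rng(3)
      n_model, n_obs, n_ens = 50, 20, 16

      X = rng.normal(1500.0, 100.0, (n_model, n_ens))      # ensemble of velocity models (prior)
      H = rng.normal(0.0, 1.0, (n_obs, n_model)) / n_model # toy linearised forward operator
      R = np.eye(n_obs) * 0.5**2                           # data-error covariance
      y = H @ rng.normal(1500.0, 100.0, n_model) + rng.normal(0, 0.5, n_obs)  # synthetic data

      x_mean = X.mean(axis=1, keepdims=True)
      A = (X - x_mean) / np.sqrt(n_ens - 1)                # model anomalies
      Y = H @ A                                            # predicted-data anomalies

      # Transform in ensemble space: C = (I + Y^T R^-1 Y)^-1
      C = np.linalg.inv(np.eye(n_ens) + Y.T @ np.linalg.solve(R, Y))
      x_a = x_mean.ravel() + A @ C @ Y.T @ np.linalg.solve(R, y - (H @ x_mean).ravel())
      P_a = A @ C @ A.T                                    # low-rank posterior covariance (rank <= n_ens)

      print("posterior std of first 5 model cells:", np.round(np.sqrt(np.diag(P_a))[:5], 2))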

  12. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis

    2017-08-01

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.

  13. Uncertainty quantification's role in modeling and simulation planning, and credibility assessment through the predictive capability maturity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rider, William J.; Witkowski, Walter R.; Mousseau, Vincent Andrew

    2016-04-13

    The importance of credible, trustworthy numerical simulations is obvious especially when using the results for making high-consequence decisions. Determining the credibility of such numerical predictions is much more difficult and requires a systematic approach to assessing predictive capability, associated uncertainties and overall confidence in the computational simulation process for the intended use of the model. This process begins with an evaluation of the computational modeling of the identified, important physics of the simulation for its intended use. This is commonly done through a Phenomena Identification Ranking Table (PIRT). Then an assessment of the evidence basis supporting the ability to computationally simulate these physics can be performed using various frameworks such as the Predictive Capability Maturity Model (PCMM). Several critical activities follow in the areas of code and solution verification, validation and uncertainty quantification, which will be described in detail in the following sections. Here, we introduce the subject matter for general applications, but specifics are given for the failure prediction project. In addition, the first task that must be completed in the verification & validation procedure is to perform a credibility assessment to fully understand the requirements and limitations of the current computational simulation capability for the specific application's intended use. The PIRT and PCMM are tools used at Sandia National Laboratories (SNL) to provide a consistent manner to perform such an assessment. Ideally, all stakeholders should be represented and contribute to perform an accurate credibility assessment. PIRTs and PCMMs are both described in brief detail below and the resulting assessments for an example project are given.

  14. A Proposal for Research on Complex Media, Imagining and Uncertainty Quantification

    DTIC Science & Technology

    2013-11-26

    demonstration that the Green’s function for wave propagation in an ergodic cavity can be recovered exactly by cross correlation of signals at two points...the continuation of a project in which we have developed autofocus methods based on a phase space formulation (Wigner transform) of the array data and

  15. Spatial and temporal quantification of forest residue volumes and delivered costs

    Treesearch

    Lucas A. Wells; Woodam Chung; Nathaniel M. Anderson; John S. Hogland

    2016-01-01

    Growing demand for bioenergy, biofuels, and bioproducts has increased interests in the utilization of biomass residues from forest treatments as feedstock. In areas with limited history of industrial biomass utilization, uncertainties in the quantity, distribution, and cost of biomass production and logistics can hinder the development of new bio-based...

  16. Uncertainty quantification for complex systems with very high dimensional response using Grassmann manifold variations

    NASA Astrophysics Data System (ADS)

    Giovanis, D. G.; Shields, M. D.

    2018-07-01

    This paper addresses uncertainty quantification (UQ) for problems where scalar (or low-dimensional vector) response quantities are insufficient and, instead, full-field (very high-dimensional) responses are of interest. To do so, an adaptive stochastic simulation-based methodology is introduced that refines the probability space based on Grassmann manifold variations. The proposed method has a multi-element character discretizing the probability space into simplex elements using a Delaunay triangulation. For every simplex, the high-dimensional solutions corresponding to its vertices (sample points) are projected onto the Grassmann manifold. The pairwise distances between these points are calculated using appropriately defined metrics and the elements with large total distance are sub-sampled and refined. As a result, regions of the probability space that produce significant changes in the full-field solution are accurately resolved. An added benefit is that an approximation of the solution within each element can be obtained by interpolation on the Grassmann manifold. The method is applied to study the probability of shear band formation in a bulk metallic glass using the shear transformation zone theory.
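
    A minimal sketch of the Grassmann-manifold ingredient is shown below: each full-field solution is mapped to a point on the manifold through a thin SVD, and the geodesic distance between two points follows from the principal angles between their subspaces; the field sizes, rank, and data are illustrative only.

      # Hedged sketch: project full-field solutions onto the Grassmann manifold via a thin
      # SVD and measure their geodesic distance through principal angles.  Refinement would
      # target simplices whose vertices are far apart; sizes here are toy.
      import numpy as np

      def grassmann_point(solution_field, rank=5):
          """Orthonormal basis of the dominant left singular subspace of a full-field solution."""
          U, _, _ = np.linalg.svd(solution_field, full_matrices=False)
          return U[:, :rank]

      def grassmann_distance(U1, U2):
          """Geodesic distance from the principal angles between two subspaces."""
          s = np.clip(np.linalg.svd(U1.T @ U2, compute_uv=False), -1.0, 1.0)
          theta = np.arccos(s)
          return np.linalg.norm(theta)

      rng = np.random.default_rng(4)
      field_a = rng.normal(size=(2000, 40))            # e.g. field snapshots at vertex a
      field_b = field_a + 0.1 * rng.normal(size=(2000, 40))
      field_c = rng.normal(size=(2000, 40))            # a very different solution

      Ua, Ub, Uc = (grassmann_point(f) for f in (field_a, field_b, field_c))
      print("d(a,b) =", round(grassmann_distance(Ua, Ub), 3))   # small: similar solutions
      print("d(a,c) =", round(grassmann_distance(Ua, Uc), 3))   # large: simplex would be refined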

  17. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
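
    As a hedged sketch of the NMF-plus-clustering step (not the MADS/Julia implementation), the code below factors a synthetic matrix of well-water concentrations into non-negative sources over several random restarts and clusters the recovered signatures with k-means; the data, the number of sources, and the restart count are assumptions.

      # Hedged sketch of NMF + k-means for blind source separation of groundwater types.
      import numpy as np
      from sklearn.decomposition import NMF
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      n_wells, n_species, n_sources = 30, 8, 3
      true_sources = rng.uniform(0, 1, (n_sources, n_species))            # geochemical signatures
      mixing = rng.dirichlet(np.ones(n_sources), size=n_wells)            # per-well mixing fractions
      X = mixing @ true_sources + 0.01 * rng.uniform(size=(n_wells, n_species))

      # Collect source estimates from several random restarts, then cluster them;
      # tight clusters suggest a robust, reproducible separation.
      candidates = []
      for seed in range(10):
          model = NMF(n_components=n_sources, init="random", random_state=seed, max_iter=2000)
          W = model.fit_transform(X)
          candidates.append(model.components_)                            # (n_sources, n_species)
      candidates = np.vstack(candidates)

      labels = KMeans(n_clusters=n_sources, n_init=10, random_state=0).fit_predict(candidates)
      centroids = np.array([candidates[labels == k].mean(axis=0) for k in range(n_sources)])
      print("recovered source signatures (cluster centroids):")
      print(np.round(centroids, 2))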

  18. Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.

    2011-01-01

    The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus for the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still on going.

  19. A stable isotope model for combined source apportionment and degradation quantification of environmental pollutants

    NASA Astrophysics Data System (ADS)

    Lutz, Stefanie; Van Breukelen, Boris

    2014-05-01

    Natural attenuation can represent a complementary or alternative approach to engineered remediation of polluted sites. In this context, compound specific stable isotope analysis (CSIA) has proven a useful tool, as it can provide evidence of natural attenuation and assess the extent of in-situ degradation based on changes in isotope ratios of pollutants. Moreover, CSIA can allow for source identification and apportionment, which might help to identify major emission sources in complex contamination scenarios. However, degradation and mixing processes in aquifers can lead to changes in isotopic compositions, such that their simultaneous occurrence might complicate combined source apportionment (SA) and assessment of the extent of degradation (ED). We developed a mathematical model (stable isotope sources and sinks model; SISS model) based on the linear stable isotope mixing model and the Rayleigh equation that allows for simultaneous SA and quantification of the ED in a scenario of two emission sources and degradation via one reaction pathway. It was shown that the SISS model with CSIA of at least two elements contained in the pollutant (e.g., C and H in benzene) allows for unequivocal SA even in the presence of degradation-induced isotope fractionation. In addition, the model enables precise quantification of the ED provided degradation follows instantaneous mixing of two sources. If mixing occurs after two sources have degraded separately, the model can still yield a conservative estimate of the overall extent of degradation. The SISS model was validated against virtual data from a two-dimensional reactive transport model. The model results for SA and ED were in good agreement with the simulation results. The application of the SISS model to field data of benzene contamination was, however, challenged by large uncertainties in measured isotope data. Nonetheless, the use of the SISS model provided a better insight into the interplay of mixing and degradation processes at the field site, as it revealed the prevailing contribution of one emission source and a low overall ED. The model can be extended to a larger number of sources and sinks. It may aid in forensics and natural attenuation assessment of soil, groundwater, surface water, or atmospheric pollution.
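
    For the instantaneous-mixing case described above, the measured isotope signatures of two elements are linear in the unknown source fraction and the Rayleigh term, so a 2x2 solve recovers both; the sketch below uses invented delta values and enrichment factors purely for illustration and is not the SISS model code.

      # Hedged sketch: with CSIA of two elements (C and H), the measured signatures are
      # linear in the source fraction x and the Rayleigh term L = ln(C/C0).
      # All delta values and enrichment factors are invented for illustration.
      import numpy as np

      # Source signatures (per mil) for benzene from two hypothetical emission sources.
      dC_A, dH_A = -28.0, -80.0
      dC_B, dH_B = -24.0, -130.0
      eps_C, eps_H = -2.0, -25.0          # enrichment factors of the degradation pathway

      dC_meas, dH_meas = -25.2, -98.0     # measured mixture after some degradation

      # dC_meas = x*dC_A + (1-x)*dC_B + eps_C*L   (and likewise for H)
      A = np.array([[dC_A - dC_B, eps_C],
                    [dH_A - dH_B, eps_H]])
      b = np.array([dC_meas - dC_B, dH_meas - dH_B])
      x, L = np.linalg.solve(A, b)

      extent_of_degradation = 1.0 - np.exp(L)
      print(f"fraction from source A: {x:.2f}")
      print(f"extent of degradation:  {extent_of_degradation:.1%}")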

  20. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
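
    A simplified, hedged sketch of an enrichment-meter-style uncertainty budget is given below: a weighted linear calibration of enrichment against net 186 keV count rate, with the fit covariance (systematic), counting statistics (random), and an assumed item-specific bias term propagated to a new item; all count rates, standards, and the bias magnitude are illustrative.

      # Hedged sketch of enrichment-meter-style calibration with a simple uncertainty budget.
      # Count rates, standards, and the bias magnitude are illustrative only.
      import numpy as np

      # Calibration standards: declared U-235 enrichment (%) and measured net count rate (cps).
      enrich = np.array([0.31, 0.71, 1.94, 2.95, 4.46])
      rate = np.array([62.0, 140.0, 385.0, 590.0, 905.0])
      rate_sd = 0.01 * rate                                    # ~1% counting statistics (assumed)

      # Weighted linear fit E = a + b*R with covariance of (a, b).
      X = np.column_stack([np.ones_like(rate), rate])
      W = np.diag(1.0 / rate_sd**2)
      cov_ab = np.linalg.inv(X.T @ W @ X)
      a, b = cov_ab @ X.T @ W @ enrich

      # New item: propagate calibration covariance, counting error, and item-specific bias.
      R_new, R_new_sd = 430.0, 4.3
      E_new = a + b * R_new
      g = np.array([1.0, R_new])                               # gradient of E w.r.t. (a, b)
      var_cal = g @ cov_ab @ g                                 # systematic (calibration) part
      var_rand = (b * R_new_sd)**2                             # random (counting) part
      var_item = (0.01 * E_new)**2                             # assumed 1% item-specific bias
      sd_total = np.sqrt(var_cal + var_rand + var_item)
      print(f"enrichment = {E_new:.3f} % +/- {sd_total:.3f} % (1-sigma)")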

  1. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach

    NASA Astrophysics Data System (ADS)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes.

  2. CASL L2 milestone report: VUQ.Y1.03, "Enable statistical sensitivity and UQ demonstrations for VERA"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung, Yixing; Adams, Brian M.; Witkowski, Walter R.

    2011-04-01

    The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).

  3. Generalized Polynomial Chaos Based Uncertainty Quantification for Planning MRgLITT Procedures

    PubMed Central

    Fahrenholtz, S.; Stafford, R. J.; Maier, F.; Hazle, J. D.; Fuentes, D.

    2014-01-01

    Purpose: A generalized polynomial chaos (gPC) method is used to incorporate constitutive parameter uncertainties within the Pennes representation of bioheat transfer phenomena. The stochastic temperature predictions of the mathematical model are critically evaluated against MR thermometry data for planning MR-guided Laser Induced Thermal Therapies (MRgLITT). Methods: Pennes bioheat transfer model coupled with a diffusion theory approximation of laser tissue interaction was implemented as the underlying deterministic kernel. A probabilistic sensitivity study was used to identify parameters that provide the most variance in temperature output. Confidence intervals of the temperature predictions are compared to MR temperature imaging (MRTI) obtained during phantom and in vivo canine (n=4) MRgLITT experiments. The gPC predictions were quantitatively compared to MRTI data using probabilistic linear and temporal profiles as well as 2-D 60 °C isotherms. Results: Within the range of physically meaningful constitutive values relevant to the ablative temperature regime of MRgLITT, the sensitivity study indicated that the optical parameters, particularly the anisotropy factor, created the most variance in the stochastic model's output temperature prediction. Further, within the statistical sense considered, a nonlinear model of the temperature and damage dependent perfusion, absorption, and scattering is captured within the confidence intervals of the linear gPC method. Multivariate stochastic model predictions using parameters with the dominant sensitivities show good agreement with experimental MRTI data. Conclusions: Given parameter uncertainties and mathematical modeling approximations of the Pennes bioheat model, the statistical framework demonstrates conservative estimates of the therapeutic heating and has potential for use as a computational prediction tool for thermal therapy planning. PMID:23692295
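
    To illustrate the non-intrusive gPC machinery in miniature, the sketch below propagates a lognormal absorption coefficient through a toy bioheat-like response using probabilists' Hermite polynomials and Gauss-Hermite quadrature; the response function and all constants are assumptions, not the Pennes model used in the paper.

      # Hedged sketch of non-intrusive generalized polynomial chaos: a toy steady-state
      # "bioheat-like" response replaces the full Pennes solver, the uncertain optical
      # absorption is lognormal, and quadrature yields chaos coefficients, mean, and variance.
      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermegauss, hermeval

      def peak_temperature(mu_a):
          """Toy surrogate for peak tissue temperature (deg C) as a function of absorption."""
          return 37.0 + 45.0 * mu_a / (1.0 + 2.0 * mu_a)

      # Uncertain absorption coefficient: lognormal, mu_a = exp(m + s*xi), xi ~ N(0, 1).
      m, s = np.log(0.5), 0.2
      order = 6
      nodes, weights = hermegauss(order + 1)                   # probabilists' Gauss-Hermite rule
      weights = weights / np.sqrt(2.0 * np.pi)                 # normalise to the standard normal

      # Chaos coefficients c_k = E[T He_k(xi)] / E[He_k^2], with E[He_k^2] = k!.
      coeffs = np.zeros(order + 1)
      T_nodes = peak_temperature(np.exp(m + s * nodes))
      for k in range(order + 1):
          ek = np.zeros(k + 1)
          ek[k] = 1.0
          Hk = hermeval(nodes, ek)
          coeffs[k] = np.sum(weights * T_nodes * Hk) / factorial(k)

      mean_T = coeffs[0]
      var_T = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
      print(f"gPC mean peak temperature: {mean_T:.2f} C, std: {np.sqrt(var_T):.2f} C")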

  4. Impact of a regional drought on terrestrial carbon fluxes and atmospheric carbon: results from a coupled carbon cycle model

    NASA Astrophysics Data System (ADS)

    Lee, E.; Koster, R. D.; Ott, L. E.; Weir, B.; Mahanama, S. P. P.; Chang, Y.; Zeng, F.

    2017-12-01

    Understanding the underlying processes that control the carbon cycle is key to predicting future global change. Much of the uncertainty in the magnitude and variability of the atmospheric carbon dioxide (CO2) stems from uncertainty in terrestrial carbon fluxes. Budget-based analyses show that such fluxes exhibit substantial interannual variability, but the relative impacts of temperature and moisture variations on regional and global scales are poorly understood. Here we investigate the impact of a regional drought on terrestrial carbon fluxes and CO2 mixing ratios over North America using the NASA Goddard Earth Observing System (GEOS) Model. Two 48-member ensembles of NASA GEOS-5 simulations with fully coupled land and atmosphere carbon components are performed - a control ensemble and an ensemble with an artificially imposed dry land surface anomaly for three months (April-June) over the lower Mississippi River Valley. Comparison of the results using the ensemble approach allows a direct quantification of the impact of the regional drought on local and proximate carbon exchange at the land surface via the carbon-water feedback processes.

  5. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error, and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.

  6. Quantification of uncertainties related to the regional application of a conceptual hydrological model in Benin (West Africa)

    NASA Astrophysics Data System (ADS)

    Bormann, H.; Diekkrüger, B.

    2003-04-01

    A conceptual model is presented to simulate the water fluxes of regional catchments in Benin (West Africa). The model is applied in the framework of the IMPETUS project (an integrated approach to the efficient management of scarce water resources in West Africa), which aims to assess the effects of environmental and anthropogenic changes on the regional hydrological processes and on the water availability in Benin. In order to assess the effects of decreasing precipitation and increasing human activities on the hydrological processes in the upper Ouémé valley, a scenario analysis is performed to predict possible changes. Therefore a regional hydrological model is proposed which reproduces the recent hydrological processes and which is able to consider the changes of landscape properties. The study presented aims to check the validity of the conceptual and lumped model under the conditions of the subhumid tree savannah and therefore analyses the importance of possible sources of uncertainty. The main focus is on the uncertainties caused by input data, model parameters and model structure. As the model simulates the water fluxes at the catchment outlet of the Térou river (3133 km2) with sufficient quality, first results of a scenario analysis are presented. Changes of interest are the expected future decrease in the amount and temporal structure of the precipitation (e.g. minus X percent precipitation during the whole season versus minus X percent precipitation at the end of the rainy season), the decrease in soil water storage capacity caused by erosion, and the increasing consumption of groundwater for drinking water and agricultural purposes. Based on the results obtained, the perspectives of lumped and conceptual models are discussed with special regard to the available management options of this kind of model. Advantages and disadvantages compared to alternative model approaches (process based, physics based) are discussed.

  7. Uncertainty Quantification in CO2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
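
    The paper formulates basis-term selection as a mixed-integer program; as a stand-in, the hedged sketch below uses a simple greedy forward selection over a Legendre basis to illustrate how a small subset of PCE terms can be chosen against toy simulator output. The simulator, parameter ranges, and term budget are invented.

      # Hedged sketch of subset selection for a PCE surrogate; greedy forward selection is
      # used here as a stand-in for the paper's mixed-integer programming formulation.
      import numpy as np
      from itertools import product
      from numpy.polynomial.legendre import legval

      def simulator(x1, x2):
          """Toy stand-in for the reservoir simulator output (e.g. gas saturation)."""
          return 0.3 + 0.5 * x1 - 0.2 * x2 + 0.4 * x1 * x2**2

      def basis_matrix(X, multi_indices):
          cols = []
          for (i, j) in multi_indices:
              ci = np.zeros(i + 1); ci[i] = 1.0
              cj = np.zeros(j + 1); cj[j] = 1.0
              cols.append(legval(X[:, 0], ci) * legval(X[:, 1], cj))
          return np.column_stack(cols)

      rng = np.random.default_rng(7)
      X = rng.uniform(-1, 1, (60, 2))                    # uncertain parameters scaled to [-1, 1]
      y = simulator(X[:, 0], X[:, 1])

      candidates = [idx for idx in product(range(4), repeat=2) if sum(idx) <= 3]
      selected = []
      for _ in range(4):                                 # keep the expansion small (4 terms)
          scores = []
          for idx in candidates:
              Phi = basis_matrix(X, selected + [idx])
              coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
              scores.append((np.linalg.norm(y - Phi @ coef), idx))
          _, best = min(scores)
          selected.append(best)
          candidates.remove(best)

      Phi = basis_matrix(X, selected)
      coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
      print("selected multi-indices:", selected)
      print("max fit error:", float(np.max(np.abs(y - Phi @ coef))))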

  8. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters of the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.

  9. Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.; Massey, Steven J.

    2017-01-01

    A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.

  10. Numerical prediction of a draft tube flow taking into account uncertain inlet conditions

    NASA Astrophysics Data System (ADS)

    Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy

    2012-11-01

    The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.

  11. The Need for Precise and Well-documented Experimental Data on Prompt Fission Neutron Spectra from Neutron-induced Fission of 239Pu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neudecker, D., E-mail: dneudecker@lanl.gov; Taddeucci, T.N.; Haight, R.C.

    2016-01-15

    The spectrum of neutrons emitted promptly after 239Pu(n,f)—a so-called prompt fission neutron spectrum (PFNS)—is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties as well as the neutron multiplicity of selected critical assemblies. A summary of data and documentation needs to improve the quality of the experimental database is provided based on the results of simulations and test-evaluations. Given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the 239Pu PFNS as a ratio to either the 235U or 252Cf PFNS.

  12. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
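
    A hedged sketch of the response-surface idea follows: a quadratic surrogate is fit to a handful of 'finite element' evaluations of a toy frequency model, and ANOVA-style first-order sensitivity indices are then estimated by Monte Carlo on the surrogate; the model, bounds, and sample sizes are illustrative only.

      # Hedged sketch: quadratic response surface in lieu of finite element solutions,
      # followed by variance-based (ANOVA-style) first-order sensitivities on the surrogate.
      import numpy as np

      def fe_frequency(k_joint, m_payload, k_boundary):
          """Toy stand-in for a modal frequency predicted by the finite element model."""
          return 10.0 * np.sqrt((k_joint + 0.5 * k_boundary) / m_payload)

      rng = np.random.default_rng(8)
      lo = np.array([0.8, 0.9, 0.5]); hi = np.array([1.2, 1.1, 1.5])   # parameter bounds

      # A small design of computer experiments (as if each row were an expensive FE run).
      X_doe = lo + (hi - lo) * rng.uniform(size=(30, 3))
      y_doe = fe_frequency(*X_doe.T)

      # Quadratic response surface with interactions.
      def features(X):
          x1, x2, x3 = X.T
          return np.column_stack([np.ones(len(X)), x1, x2, x3, x1*x2, x1*x3, x2*x3,
                                  x1**2, x2**2, x3**2])
      beta, *_ = np.linalg.lstsq(features(X_doe), y_doe, rcond=None)
      surrogate = lambda X: features(X) @ beta

      # First-order sensitivity indices on the cheap surrogate (freezing one input at a time).
      N = 20000
      X_mc = lo + (hi - lo) * rng.uniform(size=(N, 3))
      total_var = surrogate(X_mc).var()
      for i, name in enumerate(["k_joint", "m_payload", "k_boundary"]):
          X_cond = X_mc.copy()
          grid = np.linspace(lo[i], hi[i], 50)
          cond_means = []
          for g in grid:
              X_cond[:, i] = g
              cond_means.append(surrogate(X_cond).mean())
          S_i = np.var(cond_means) / total_var
          print(f"first-order sensitivity of {name}: {S_i:.2f}")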

  13. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  14. Assessing the uncertainties of seismic velocity and anisotropy structure of the Northern Great Plains using a transdimensional Bayesian approach

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2017-12-01

    Seismic imaging utilizing complementary seismic data provides unique insight into the formation, evolution, and current structure of continental lithosphere. While numerous efforts have improved the resolution of seismic structure, quantifying uncertainties remains challenging due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate seismic observables, including Rayleigh and Love wave dispersion and Ps and Sp receiver functions, to invert for shear velocity (Vs), compressional velocity (Vp), density, and radial anisotropy of the lithospheric structure. The Bayesian nature and transdimensionality of this approach allow quantification of model parameter uncertainties while keeping the models parsimonious. Both synthetic tests and inversions of actual Ps and Sp receiver function data are performed. We quantify the information gained in different inversions by calculating the Kullback-Leibler divergence. Furthermore, we explore the ability of Rayleigh and Love wave dispersion data to constrain radial anisotropy. We show that when multiple types of model parameters (Vsv, Vsh, and Vp) are inverted simultaneously, the constraints on radial anisotropy are limited by relatively large data uncertainties and trade off strongly with Vp. We then perform a joint inversion of the surface wave dispersion (SWD) and the Ps and Sp receiver functions, and show that the constraints on both isotropic Vs and radial anisotropy are significantly improved. To achieve faster convergence of the rjMcMC, we propose a progressive inclusion scheme and invert SWD measurements and receiver functions from about 400 USArray stations in the Northern Great Plains. We start by using only the SWD data because of their fast convergence rate. We then use the average of the ensemble as a starting model for the joint inversion, which is able to resolve distinct seismic signatures of geological structures including the Trans-Hudson orogen, Wyoming craton, and Yellowstone hotspot. Various analyses are performed to assess the uncertainties of the seismic velocities and Moho depths. We also address the importance of careful data processing of receiver functions by illustrating artifacts due to unmodelled sediment reverberations.
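
    One quantitative thread in this abstract is measuring the information gained by an inversion as the Kullback-Leibler divergence between prior and posterior ensembles. The snippet below is a minimal, generic sketch of that calculation using histogram density estimates; the prior range and the two "posterior" distributions are invented for illustration and are not the study's results.

```python
import numpy as np

def kl_divergence(posterior_samples, prior_samples, bins=50):
    """Histogram estimate of D_KL(posterior || prior), in nats."""
    lo = min(posterior_samples.min(), prior_samples.min())
    hi = max(posterior_samples.max(), prior_samples.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(posterior_samples, bins=edges, density=True)
    q, _ = np.histogram(prior_samples, bins=edges, density=True)
    width = edges[1] - edges[0]
    mask = (p > 0) & (q > 0)               # ignore empty bins (pragmatic choice)
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * width

rng = np.random.default_rng(1)
prior = rng.uniform(3.0, 5.0, 100_000)            # broad prior on Vs (km/s), illustrative
post_swd = rng.normal(4.3, 0.25, 100_000)         # "SWD-only" posterior, illustrative
post_joint = rng.normal(4.3, 0.08, 100_000)       # "joint SWD + receiver function" posterior

print("information gain, SWD only :", kl_divergence(post_swd, prior))
print("information gain, joint    :", kl_divergence(post_joint, prior))
```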

  15. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David; Agarwal, Deborah A.; Sun, Xin

    2011-09-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  16. CCSI and the role of advanced computing in accelerating the commercial deployment of carbon capture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, D.; Agarwal, D.; Sun, X.

    2011-01-01

    The Carbon Capture Simulation Initiative is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technology. The CCSI Toolset consists of an integrated multi-scale modeling and simulation framework, which includes extensive use of reduced order models (ROMs) and a comprehensive uncertainty quantification (UQ) methodology. This paper focuses on the interrelation among high performance computing, detailed device simulations, ROMs for scale-bridging, UQ and the integration framework.

  17. Evolution of chemical-specific adjustment factors (CSAF) based on recent international experience; increasing utility and facilitating regulatory acceptance.

    PubMed

    Bhat, Virunya S; Meek, M E Bette; Valcke, Mathieu; English, Caroline; Boobis, Alan; Brown, Richard

    2017-10-01

    The application of chemical-specific toxicokinetic or toxicodynamic data to address interspecies differences and human variability in the quantification of hazard has the potential to reduce uncertainty and better characterize variability compared with the use of traditional default or categorically-based uncertainty factors. The present review summarizes the state of the science since the introduction of the World Health Organization/International Programme on Chemical Safety (WHO/IPCS) guidance on chemical-specific adjustment factors (CSAF) in 2005 and the availability of subsequent applicable guidance, including the WHO/IPCS guidance on physiologically-based pharmacokinetic (PBPK) modeling in 2010 and the U.S. EPA guidance on data-derived extrapolation factors in 2014. A summary of lessons learned from an analysis of more than 100 case studies from global regulators or the published literature illustrates the utility and evolution of CSAF in regulatory decisions. Challenges in CSAF development relate to the adequacy of, or confidence in, the supporting data, including the verification or validation of PBPK models. The analysis also identified issues related to the adequacy of CSAF documentation, such as inconsistent terminology and often limited and/or inconsistent reporting of both supporting data and risk assessment context. Based on this analysis, recommendations for standardized terminology, documentation, and relevant interdisciplinary research and engagement are included to facilitate the continuing evolution of CSAF development and guidance.
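
    For readers unfamiliar with how a CSAF enters the arithmetic: under the WHO/IPCS scheme the default 100-fold uncertainty factor is split into interspecies (10 = 4.0 toxicokinetic x 2.5 toxicodynamic) and intraspecies (10 = 3.16 x 3.16) subfactors, and a chemical-specific value replaces one or more of these defaults. The toy calculation below illustrates that substitution; the PBPK-derived interspecies kinetic ratio of 2.2 is hypothetical, not taken from any of the reviewed case studies.

```python
# Default WHO/IPCS subfactors: interspecies 10 = 4.0 (kinetic, AK) x 2.5 (dynamic, AD),
# intraspecies 10 = 3.16 (kinetic, HK) x 3.16 (dynamic, HD).
DEFAULTS = {"AK": 4.0, "AD": 2.5, "HK": 3.16, "HD": 3.16}

def composite_factor(csaf=None):
    """Multiply the four subfactors, substituting any chemical-specific values."""
    factors = dict(DEFAULTS, **(csaf or {}))
    total = 1.0
    for value in factors.values():
        total *= value
    return factors, total

# Hypothetical PBPK-derived interspecies kinetic ratio used as a CSAF for the AK component.
factors, total = composite_factor({"AK": 2.2})
print(factors, "composite =", round(total, 1))   # ~55 instead of the default ~100
```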

  18. Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4

    NASA Astrophysics Data System (ADS)

    Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.

    2013-12-01

    Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine-learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e., symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under the auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
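
    The workflow described (a Latin hypercube design over parameter ranges, an ensemble of model runs, and an SVR surrogate used to build a smooth PDF of the warming) can be sketched generically as below. The parameter count and ensemble size match the abstract, but the stand-in "model" response and the SVR hyperparameters are illustrative assumptions, not those of the study.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_params, n_runs = 36, 1039            # parameter count and ensemble size from the abstract

# 1) Latin hypercube design over unit-scaled parameter ranges.
design = qmc.LatinHypercube(d=n_params, seed=0).random(n=n_runs)

# 2) Stand-in for the ensemble of slab-ocean runs: a mostly linear response
#    around 3 K plus small noise (purely illustrative, not the model physics).
weights = rng.normal(0.0, 0.15, n_params)
warming = 3.0 + (design - 0.5) @ weights + rng.normal(0.0, 0.05, n_runs)

# 3) Fit a support vector regression surrogate to the ensemble ...
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(design, warming)

# 4) ... and use it to build a smooth PDF of equilibrium warming from a much
#    larger sample of the parameter space.
big = qmc.LatinHypercube(d=n_params, seed=1).random(n=50_000)
samples = svr.predict(big)
p10, p50, p90 = np.percentile(samples, [10, 50, 90])
print(f"10 / 50 / 90 percentiles: {p10:.2f} / {p50:.2f} / {p90:.2f} K")
```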

  19. Exploring the usability of isotopically anomalous oxygen in bones and teeth as paleo-CO2-barometer

    NASA Astrophysics Data System (ADS)

    Pack, Andreas; Gehler, Alexander; Süssenberger, Annette

    2013-02-01

    Fluctuations in atmospheric pCO2 may have played the key role in global climate throughout Earth's history. For the quantification of past variations in atmospheric pCO2, several geological proxy approaches and geochemical models have been developed. Here, we evaluate a new CO2 proxy approach that is based on the triple oxygen isotope composition (16O, 17O, 18O) of skeletal apatite of terrestrial mammals. Our approach utilizes the relation between an anomaly in 17O of tropospheric air O2 and atmospheric pCO2. The anomaly is transferred from inhaled air O2 to the skeletal apatite of mammals. Hence, triple oxygen isotope data of mammalian bioapatite provide information regarding pCO2 during the animal's lifetime. The approach was calibrated with a detailed mass balance model that was verified by analyses of a set of recent mammals. We evaluate the potential of this new independent terrestrial paleo-CO2 proxy in a case study including Eocene to Miocene samples. The present investigation provides promising results that are in good agreement with existing proxy and model data. The uncertainty intrinsic to the proxy is mainly due to uncertainties in physiological parameters.
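
    The proxy rests on the size of the 17O anomaly of air O2 (and hence of the oxygen incorporated into bioapatite) relative to a mass-dependent reference fractionation line. A minimal sketch of that anomaly calculation is given below; the reference slope (0.528) and the input delta values are illustrative assumptions, and the calibration linking the anomaly to pCO2 is the paper's mass balance model, which is not reproduced here.

```python
import numpy as np

def delta_prime(delta_permil):
    """Linearized (logarithmic) delta value, in per mil."""
    return 1000.0 * np.log(delta_permil / 1000.0 + 1.0)

def cap_delta_17O(d17O, d18O, ref_slope=0.528):
    """17O anomaly relative to an assumed mass-dependent reference line."""
    return delta_prime(d17O) - ref_slope * delta_prime(d18O)

# Illustrative numbers only: tropospheric O2 carries a small negative 17O
# anomaly whose magnitude scales with pCO2 and gross primary productivity.
print(cap_delta_17O(d17O=11.8, d18O=23.0))   # roughly -0.3 per mil
```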

  20. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological, or radiological material. With the growing fear of natural, accidental, or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components that are combined to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to the performance of the source characterization process. In this work, an efficient quadrature-based method is used to quantify the uncertainty in plume dispersion models subject to uncertain source parameters. In addition, a fast and accurate approach, based on a combination of polynomial chaos theory and the method of quadrature points, is used to approximate probabilistic hazard maps. Besides precise quantification of uncertainty, useful measurement data are also essential for accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of observation sensors to improve the performance of the source characterization process. The key goal of this framework is to locate a set of mobile sensors such that the measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinematic constraints on the mobile sensors, and a dynamic programming method is used to solve the resulting optimal control problem. To close the loop of the source characterization process, two estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise in the measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently arises when assimilating atmospheric data derived from satellite imagery, because such data can be polluted with noise depending on weather conditions, clouds, humidity, and other factors, and there is no accurate procedure to quantify the error in the recorded satellite data. Hence, using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach is proposed to tackle these types of real-world problems with greater accuracy and robustness. A simple example representing a real-world scenario is presented to validate the developed methodology.
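
    The sensor-allocation idea (choose observations that maximize mutual information between model predictions and data) can be illustrated with a static linear-Gaussian toy problem. The sketch below greedily selects scalar sensors by information gain and updates the covariance with a Kalman step; it is not the dissertation's dynamic-programming formulation with mobile-sensor kinematic constraints, and the dimensions, prior covariance, and observation operators are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear-Gaussian stand-in for the plume problem: the state holds uncertain
# source parameters; H[i] is the sensitivity of a candidate sensor location.
n_state, n_candidates, noise_var = 4, 30, 0.1 ** 2
P = np.diag([1.0, 1.0, 0.5, 0.25])             # prior covariance of source parameters
H = rng.normal(size=(n_candidates, n_state))    # candidate observation operators

def info_gain(P, h, r):
    """Mutual information (nats) of one scalar measurement under a Gaussian model."""
    return 0.5 * np.log(1.0 + h @ P @ h / r)

# Greedy placement: repeatedly pick the candidate with the largest information
# gain, then update the covariance with the corresponding Kalman step.
chosen = []
for _ in range(3):
    gains = [info_gain(P, H[i], noise_var) if i not in chosen else -np.inf
             for i in range(n_candidates)]
    best = int(np.argmax(gains))
    chosen.append(best)
    h = H[best]
    K = P @ h / (h @ P @ h + noise_var)         # Kalman gain for a scalar measurement
    P = P - np.outer(K, h @ P)                  # posterior covariance after that sensor
    print(f"pick sensor {best}, gain {gains[best]:.3f} nats")
```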
