Sample records for input parameter uncertainties

  1. Optimization Under Uncertainty for Electronics Cooling Design

    NASA Astrophysics Data System (ADS)

    Bodla, Karthik K.; Murthy, Jayathi Y.; Garimella, Suresh V.

    Optimization under uncertainty is a powerful methodology used in design and optimization to produce robust, reliable designs. Such an optimization methodology, employed when the input quantities of interest are uncertain, quantifies the resulting output uncertainties, helping the designer choose input parameters that would result in satisfactory thermal solutions. Apart from providing basic statistical information such as the mean and standard deviation of the output quantities, auxiliary data from an uncertainty-based optimization, such as local and global sensitivities, help the designer decide which input parameter(s) the output quantity of interest is most sensitive to. This guides the design of experiments based on the most sensitive input parameter(s). A further crucial output of such a methodology is the solution to the inverse problem - finding the allowable uncertainty range in the input parameter(s), given an acceptable uncertainty range in the output quantity of interest...
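
    A minimal sketch of the forward and inverse questions described above, assuming a hypothetical one-line thermal model (junction_temperature) with normally distributed inputs; the function, nominal values and tolerances are invented for illustration, not the authors' model.

    ```python
    import numpy as np

    def junction_temperature(h, q):
        """Hypothetical thermal model: junction temperature [C] from a
        heat-transfer coefficient h [W/m^2-K] and a heat load q [W]."""
        return 25.0 + q / (h * 1e-3)

    def output_stats(h_std, q_std, n=20_000, seed=0):
        """Propagate normally distributed input uncertainty by Monte Carlo.
        A fixed seed gives common random numbers, so the search below is stable."""
        rng = np.random.default_rng(seed)
        h = rng.normal(50.0, h_std, n)
        q = rng.normal(2.0, q_std, n)
        t = junction_temperature(h, q)
        return t.mean(), t.std()

    mean_t, std_t = output_stats(h_std=2.0, q_std=0.1)
    print(f"forward problem: T = {mean_t:.1f} +/- {std_t:.1f} C")

    # Inverse problem: largest allowable uncertainty in h (q fixed) such that the
    # output standard deviation stays below 3 C, found here by simple bisection.
    lo, hi = 0.0, 5.0
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        _, s = output_stats(h_std=mid, q_std=0.1)
        lo, hi = (mid, hi) if s < 3.0 else (lo, mid)
    print(f"inverse problem: allowable std(h) ~ {0.5 * (lo + hi):.2f} W/m^2-K")
    ```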

  2. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  3. Incorporating uncertainty in RADTRAN 6.0 input files.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, Matthew L.; Weiner, Ruth F.; Heames, Terence John

    Uncertainty may be introduced into RADTRAN analyses by distributing input parameters. The MELCOR Uncertainty Engine (Gauntt and Erickson, 2004) has been adapted for use in RADTRAN to determine the parameter shape and minimum and maximum of the distribution, to sample on the distribution, and to create an appropriate RADTRAN batch file. Coupling input parameters is not possible in this initial application. It is recommended that the analyst be very familiar with RADTRAN and able to edit or create a RADTRAN input file using a text editor before implementing the RADTRAN Uncertainty Analysis Module. Installation of the MELCOR Uncertainty Engine is required for incorporation of uncertainty into RADTRAN. Gauntt and Erickson (2004) provides installation instructions as well as a description and user guide for the uncertainty engine.

  4. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters, as well as their dependency on ion species and ion energy, are typically subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in, e.g., RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, the only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
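
    A sketch of a variance-based first-order sensitivity index estimated by Monte Carlo sampling (a Saltelli-type pick-freeze estimator), with a toy function standing in for the RBE/EQD2 evaluation; the input names, nominal values and ±30% ranges are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def model(x):
        """Toy stand-in for an RBE/EQD2 evaluation: the columns of x are the
        uncertain biological inputs (here labelled alpha, beta, dose per fraction)."""
        a, b, d = x[:, 0], x[:, 1], x[:, 2]
        return (a * d + b * d**2) / (a + b)

    n, k = 100_000, 3
    # Two independent sample matrices drawn from the assumed input distributions
    # (+/-30% uniform around nominal values, mimicking large relative uncertainties).
    nominal = np.array([0.1, 0.05, 2.0])
    A = nominal * rng.uniform(0.7, 1.3, (n, k))
    B = nominal * rng.uniform(0.7, 1.3, (n, k))

    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    # First-order sensitivity S_i: fraction of output variance explained by input i
    # alone; S ~ 0 means no impact, S ~ 1 means it is the only influential input.
    for i, name in enumerate(["alpha", "beta", "dose/fraction"]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S_i = np.mean(yB * (model(ABi) - yA)) / var_y
        print(f"S[{name}] = {S_i:.2f}")
    ```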

  5. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on the model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo simulation (MCS) linked with a geospatial merit matrix-based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. The output parameters are more sensitive to hydraulic head in steep terrain than in flat and mild terrains. Furthermore, they are more sensitive to mean annual streamflow in flat terrain.

  6. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of IMX 101 Components

    DTIC Science & Technology

    2017-05-01

    ERDC/EL TR-17-7, Environmental Security Technology Certification Program (ESTCP), May 2017. Evaluation of Uncertainty in Constituent Input Parameters...Environmental Evaluation and Characterization System (TREECS™) was applied to a groundwater site and a surface water site to evaluate the sensitivity

  7. Evaluation of Uncertainty in Constituent Input Parameters for Modeling the Fate of RDX

    DTIC Science & Technology

    2015-07-01

    exercise was to evaluate the importance of chemical-specific model input parameters, the impacts of their uncertainty, and the potential benefits of...chemical-specific inputs for RDX that were determined to be sensitive with relatively high uncertainty: these included the soil-water linear...Koc for organic chemicals. The EFS values provided for log Koc of RDX were 1.72 and 1.95. OBJECTIVE: TREECS™ (http://el.erdc.usace.army.mil/treecs

  8. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    NASA Astrophysics Data System (ADS)

    Han, Feng; Zheng, Yi

    2018-06-01

    Input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  9. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC

    Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease the output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only limited data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
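
    A small sketch of the sampling approach, contrasting uncertainty propagation with and without input correlations; the two-input "electricity production" stand-in and all numbers are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Two correlated inputs, e.g. fuel use and combustion efficiency of a process
    # (illustrative means, standard deviations and correlation).
    mean = np.array([10.0, 0.4])
    std = np.array([1.0, 0.05])
    rho = 0.8  # assumed correlation between the two inputs

    def impact(x):
        """Toy LCA characterization: emissions proportional to fuel / efficiency."""
        fuel, eff = x[..., 0], x[..., 1]
        return 2.5 * fuel / eff

    n = 200_000
    # Sampling while ignoring the correlation (independent draws)
    x_ind = rng.normal(mean, std, (n, 2))
    # Sampling while including the correlation (multivariate normal)
    cov = np.diag(std**2)
    cov[0, 1] = cov[1, 0] = rho * std[0] * std[1]
    x_cor = rng.multivariate_normal(mean, cov, n)

    print("output variance, independent:", np.var(impact(x_ind)).round(1))
    print("output variance, correlated :", np.var(impact(x_cor)).round(1))
    ```

    With a positive correlation between the two inputs, the ratio-type impact partially cancels and the propagated output variance drops well below the independent case, which is the kind of under- or overestimation the abstract warns about.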

  10. Explicitly integrating parameter, input, and structure uncertainties into Bayesian Neural Networks for probabilistic hydrologic forecasting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong; Liang, Faming; Yu, Beibei

    2011-11-09

    Estimating the uncertainty of hydrologic forecasting is valuable to water resources and other relevant decision making processes. Recently, Bayesian Neural Networks (BNNs) have proven to be powerful tools for quantifying the uncertainty of streamflow forecasting. In this study, we propose a Markov Chain Monte Carlo (MCMC) framework to incorporate the uncertainties associated with input, model structure, and parameters into BNNs. This framework allows the structure of the neural networks to change by removing or adding connections between neurons, and enables scaling of input data by using rainfall multipliers. The results show that the new BNNs outperform BNNs that only consider uncertainties associated with parameters and model structure. Critical evaluation of the posterior distributions of neural network weights, number of effective connections, rainfall multipliers, and hyper-parameters shows that the assumptions held in our BNNs are not well supported. Further understanding of the characteristics of different uncertainty sources and incorporating output error into the MCMC framework are expected to enhance the application of neural networks for uncertainty analysis of hydrologic forecasting.

  11. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  12. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still intend to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to be able to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To achieve this, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, …), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ for hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.

  13. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.

  14. Local Sensitivity of Predicted CO 2 Injectivity and Plume Extent to Model Inputs for the FutureGen 2.0 site

    DOE PAGES

    Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...

    2014-12-31

    Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose a local sensitivity coefficient method and apply it to the FutureGen 2.0 Site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and three-quarters of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbouring layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
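
    A generic one-at-a-time finite-difference sketch of an LSC-style ranking; the plume_area function, the input names and nominal values, and the 1% perturbation definition are assumptions standing in for the reservoir simulator and the paper's exact formulation.

    ```python
    import numpy as np

    def plume_area(inputs):
        """Toy stand-in for the reservoir simulator output (e.g. CO2 plume area)."""
        k_h, p0, s_res, c_salt = inputs
        return 1e3 * k_h**0.5 * p0 / (1.0 + s_res) / (0.5 + c_salt)

    names = ["horizontal permeability", "initial pressure",
             "residual saturation", "initial salt concentration"]
    nominal = np.array([1.0e-13, 20.0, 0.25, 0.1])

    # Local sensitivity coefficient: percent change of the output for a 1%
    # perturbation of each input, holding the others at their nominal values.
    base = plume_area(nominal)
    lsc = []
    for i in range(len(nominal)):
        x = nominal.copy()
        x[i] *= 1.01
        lsc.append(100 * (plume_area(x) - base) / base)

    for name, s in sorted(zip(names, lsc), key=lambda t: -abs(t[1])):
        print(f"{name:28s} LSC = {s:+.2f} % per +1% input change")

    # Composite sensitivity of a subset of inputs: sum of their individual |LSC|.
    print(f"composite (pressure + salt): {abs(lsc[1]) + abs(lsc[3]):.2f}")
    ```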

  15. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic representations of the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by an a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by randomly sampling from the input probability distribution functions and running the model as many times as required until the results converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than when using the prior information for the input data: the variation of the uncertain parameters decreases and the probability of the observed data improves as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC

  16. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

    This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, a backstepping-based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control commands remain implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying are conducted, in comparison with conventional methods, to verify that the proposed approach enables the spacecraft to track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances and control saturation constraints. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    NASA Astrophysics Data System (ADS)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
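
    A toy sketch of the central idea, treating the input-error mean and standard deviation as uncertain parameters inside a Bayesian calibration; it uses a random-walk Metropolis sampler and a power-law stand-in for the sediment-transport model (not SRH-1D or the Wu equation), with all data synthetic and all values assumed for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def transport(q, theta):
        """Toy stand-in for the sediment-transport model: load as a power law of flow."""
        return theta * q**1.5

    # Synthetic truth: the recorded inflow q_obs is biased and noisy relative to the
    # true inflow that actually drove the observed loads.
    q_true = rng.uniform(50.0, 150.0, 40)
    q_obs = q_true + rng.normal(5.0, 8.0, q_true.size)      # input error: mean 5, std 8
    y_obs = transport(q_true, 0.02) + rng.normal(0.0, 2.0, q_true.size)

    def log_post(theta, mu, sigma_in):
        """Flat priors; Gaussian likelihood with the input-error mean treated as a bias
        and the input-error std propagated (linearized) into the predictive variance."""
        if theta <= 0 or sigma_in <= 0:
            return -np.inf
        q = q_obs - mu
        if np.any(q <= 0):
            return -np.inf
        pred = transport(q, theta)
        dfdq = 1.5 * theta * q**0.5
        var = 2.0**2 + (dfdq * sigma_in)**2
        return -0.5 * np.sum((y_obs - pred)**2 / var + np.log(var))

    # Random-walk Metropolis over (theta, input-error mean, input-error std)
    x = np.array([0.05, 0.0, 1.0])
    lp = log_post(*x)
    chain = []
    for _ in range(20_000):
        prop = x + rng.normal(0.0, [0.002, 0.5, 0.5])
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    chain = np.array(chain[5_000:])
    print("posterior means (theta, input bias, input std):", chain.mean(axis=0).round(3))
    ```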

  18. Uncertainty in predictions of oil spill trajectories in a coastal zone

    NASA Astrophysics Data System (ADS)

    Sebastião, P.; Guedes Soares, C.

    2006-12-01

    A method is introduced to determine the uncertainties in the predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, which are the input parameters, and calculates the standard deviation of the output results, which provides a measure of the uncertainty of the model as a result of the uncertainties of the input parameters. In addition to a single trajectory that is calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when various simulations are done taking into account the uncertainties of the input parameters. This band of trajectories defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines on the southwestern coast of Portugal. The model represented well the distinction between a wind-driven part that remained offshore and a tide-driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields and the other using wind and tidal currents. In both cases a reasonable approximation to the observed results was obtained. The envelope of likely trajectories obtained with the uncertainty modelling proved to give a better interpretation of the trajectories simulated by the oil spill model.
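
    A minimal sketch of building a trajectory envelope by re-running a drift model with perturbed wind forcing; the toy advection model, drift factor and wind statistics are assumptions for illustration, not the classic spill model used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def trajectory(wind_u, wind_v, hours=48, drift_factor=0.03):
        """Toy drift model: the slick centroid moves with a fraction of the wind."""
        x = np.zeros(hours + 1)
        y = np.zeros(hours + 1)
        for t in range(hours):
            x[t + 1] = x[t] + drift_factor * wind_u[t] * 3.6   # m/s -> km per 1 h step
            y[t + 1] = y[t] + drift_factor * wind_v[t] * 3.6
        return x, y

    # Mean wind forcing (m/s) plus an assumed input uncertainty (std of 1.5 m/s)
    u_mean = np.full(48, 5.0)
    v_mean = np.full(48, -2.0)

    x_mean, y_mean = trajectory(u_mean, v_mean)
    print(f"mean-input trajectory endpoint: ({x_mean[-1]:.1f}, {y_mean[-1]:.1f}) km")

    runs = [trajectory(u_mean + rng.normal(0, 1.5, 48), v_mean + rng.normal(0, 1.5, 48))
            for _ in range(500)]
    xs = np.array([r[0] for r in runs])
    ys = np.array([r[1] for r in runs])

    # Envelope of likely trajectories: 5th-95th percentile of position at 48 h
    x5, x95 = np.percentile(xs[:, -1], [5, 95])
    y5, y95 = np.percentile(ys[:, -1], [5, 95])
    print(f"48 h displacement east : {x5:.1f} to {x95:.1f} km")
    print(f"48 h displacement north: {y5:.1f} to {y95:.1f} km")
    ```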

  19. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur

    2016-05-01

    In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as the case study and an elevation-distributed HBV model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of the precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency of the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Less information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores by giving both better and worse scores.

  20. Input Uncertainty and its Implications on Parameter Assessment in Hydrologic and Hydroclimatic Modelling Studies

    NASA Astrophysics Data System (ADS)

    Chowdhury, S.; Sharma, A.

    2005-12-01

    Hydrological model inputs are often derived from measurements at point locations taken at discrete time steps. The nature of the uncertainty associated with such inputs is thus a function of the quality and number of measurements available in time. A change in these characteristics (such as a change in the number of rain-gauge inputs used to derive spatially averaged rainfall) results in inhomogeneity in the associated distributional profile. Ignoring such uncertainty can lead to models that aim to simulate based on the observed input variable instead of the true measurement, resulting in a biased representation of the underlying system dynamics as well as an increase in both the bias and the predictive uncertainty of simulations. This is especially true of cases where the nature of uncertainty likely in the future is significantly different from that in the past. Possible examples include situations where the accuracy of the catchment-averaged rainfall has increased substantially due to an increase in rain-gauge density, or the accuracy of climatic observations (such as sea surface temperatures) has increased due to the use of more accurate remote sensing technologies. We introduce here a method to ascertain the true value of parameters in the presence of additive uncertainty in model inputs. This method, known as SIMulation EXtrapolation (SIMEX; Cook, 1994), operates on the basis of an empirical relationship between parameters and the level of additive input noise (or uncertainty). The method starts with generating a series of alternate realisations of the model inputs by artificially adding white noise in increasing multiples of the known error variance. The alternate realisations lead to alternate sets of parameters that are increasingly biased with respect to the truth due to the increased variability in the inputs. Once several such realisations have been drawn, one is able to formulate an empirical relationship between the parameter values and the level of additive noise present. SIMEX is based on the theory that the trend in the alternate parameters can be extrapolated back to the notional error-free case. We illustrate the utility of SIMEX in a synthetic rainfall-runoff modelling scenario and in an application studying the dependence of uncertain distributed sea surface temperature anomalies on an indicator of the El Niño Southern Oscillation, the Southern Oscillation Index (SOI). The errors in rainfall data and their effect are explored using the Sacramento rainfall-runoff model. The rainfall uncertainty is assumed to be multiplicative and temporally invariant. The model used to relate the sea surface temperature anomalies (SSTA) to the SOI is assumed to be of a linear form. The nature of the uncertainty in the SSTA is additive and varies with time. The SIMEX framework allows assessment of the relationship between the error-free inputs and the response. Cook, J.R., Stefanski, L.A., Simulation-Extrapolation Estimation in Parametric Measurement Error Models, Journal of the American Statistical Association, 89(428), 1314-1328, 1994.
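
    A compact sketch of the SIMEX recipe on a synthetic errors-in-variables regression: refit the parameter after adding noise in increasing multiples of the known input-error variance, then extrapolate the fitted trend back to the notional error-free case. The data, the linear model and all numbers here are invented for illustration; the original application uses the Sacramento model and an SSTA-SOI relationship.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic example: a linear response y = 2*x + noise, but x is only observed
    # with additive measurement error, which attenuates a naive slope estimate.
    n = 2000
    x_true = rng.normal(0, 1, n)
    y = 2.0 * x_true + rng.normal(0, 0.5, n)
    err_var = 0.5                                   # known input-error variance
    x_obs = x_true + rng.normal(0, err_var**0.5, n)

    def fit_slope(x, y):
        return np.polyfit(x, y, 1)[0]

    # SIMEX: re-estimate the parameter after adding extra white noise in increasing
    # multiples lambda of the known error variance ...
    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    slopes = []
    for lam in lambdas:
        reps = [fit_slope(x_obs + rng.normal(0, (lam * err_var)**0.5, n), y)
                for _ in range(50)]
        slopes.append(np.mean(reps))

    # ... then extrapolate the parameter-vs-noise trend back to lambda = -1,
    # the notional error-free case. SIMEX reduces (not eliminates) the bias.
    coef = np.polyfit(lambdas, slopes, 2)
    simex_slope = np.polyval(coef, -1.0)
    print("naive slope :", round(fit_slope(x_obs, y), 3))
    print("SIMEX slope :", round(float(simex_slope), 3), "(true slope is 2.0)")
    ```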

  1. Using global sensitivity analysis of demographic models for ecological impact assessment.

    PubMed

    Aiello-Lammens, Matthew E; Akçakaya, H Resit

    2017-02-01

    Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
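
    A sketch of the regression-based GSA idea described above, with a scenario indicator included alongside the uncertain parameters so that the share of outcome variation due to the simulated impact can be separated from that due to parameter uncertainty; the toy population projection, parameter ranges and habitat-loss value are assumptions, not the Snowy Plover model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def final_population(survival, fecundity, habitat_loss):
        """Toy stand-in for a PVA projection over 50 annual time steps."""
        n = 500.0
        for _ in range(50):
            n = n * survival * (1 - habitat_loss) + n * fecundity
        return n

    m = 1000
    # Vary all uncertain inputs simultaneously (GSA), plus a scenario flag for the
    # projected impact (e.g. sea-level rise) that is not a measurement uncertainty.
    survival = rng.uniform(0.75, 0.85, m)
    fecundity = rng.uniform(0.18, 0.26, m)
    scenario = rng.integers(0, 2, m)            # 0 = baseline, 1 = impact scenario
    habitat_loss = np.where(scenario == 1, 0.02, 0.0)

    y = np.log([final_population(s, f, h)
                for s, f, h in zip(survival, fecundity, habitat_loss)])

    # Regression-based importance: standardized coefficients separate the share of
    # outcome variation driven by parameter uncertainty from that driven by the scenario.
    X = np.column_stack([survival, fecundity, scenario])
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    design = np.column_stack([np.ones(m), Xs])
    beta, *_ = np.linalg.lstsq(design, (y - y.mean()) / y.std(), rcond=None)
    for name, b in zip(["survival", "fecundity", "impact scenario"], beta[1:]):
        print(f"{name:16s} standardized coefficient = {b:+.2f}")
    ```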

  2. A game theoretic controller for a linear time-invariant system with parameter uncertainty and its application to the Space Station

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1990-01-01

    A game theoretic controller is developed for a linear time-invariant system with parameter uncertainties in the system and input matrices. The input-output decomposition modeling of the plant uncertainty is adopted. The uncertain dynamic system is represented as an internal feedback loop in which the system is assumed to be forced by a fictitious disturbance caused by the parameter uncertainty. By considering the input and the fictitious disturbance as two noncooperative players, a differential game problem is constructed. It is shown that the resulting time-invariant controller stabilizes the uncertain system for a prescribed uncertainty bound. This game theoretic controller is applied to the momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Inclusion of the external disturbance torque in the design procedure results in a dynamic feedback controller which consists of a conventional PID control and a cyclic disturbance rejection filter. It is shown that the game theoretic design, compared to the LQR design or pole placement design, improves the stability robustness with respect to inertia variations.

  3. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional / national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and at various spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practice data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability based on sampled sets of input data and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Furthermore, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of the modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty of a regional N2O and NO emissions inventory for the state of Saxony, Germany.

  4. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples suggest the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
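
    A sketch of first-order (delta-method) propagation with correlated inputs, where the output variance is the gradient-weighted sum over the input covariance matrix, checked against correlated Monte Carlo sampling. The three-input toy function and covariance values are assumptions and this is not necessarily the paper's exact analytic formulation.

    ```python
    import numpy as np

    def f(x):
        """Toy nonlinear response with three inputs (a stand-in for the HIV model)."""
        return x[0] * np.exp(-x[1]) + x[2]**2

    mean = np.array([1.0, 0.5, 2.0])
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])    # off-diagonal entries = input correlations

    # First-order variance: Var(Y) ~ g.T @ C @ g with g the gradient at the mean,
    # so correlated inputs contribute through the off-diagonal covariance terms.
    eps = 1e-6
    g = np.array([(f(mean + eps * np.eye(3)[i]) - f(mean)) / eps for i in range(3)])
    var_first_order = g @ cov @ g

    # Monte Carlo check with correlated normal samples
    rng = np.random.default_rng(5)
    x = rng.multivariate_normal(mean, cov, 200_000)
    var_mc = np.var(f(x.T))

    print("first-order variance with correlations:", round(float(var_first_order), 3))
    print("Monte Carlo variance                  :", round(float(var_mc), 3))
    ```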

  5. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.

  6. Testing the robustness of management decisions to uncertainty: Everglades restoration scenarios.

    PubMed

    Fuller, Michael M; Gross, Louis J; Duke-Sylvester, Scott M; Palmer, Mark

    2008-04-01

    To effectively manage large natural reserves, resource managers must prepare for future contingencies while balancing the often conflicting priorities of different stakeholders. To deal with these issues, managers routinely employ models to project the response of ecosystems to different scenarios that represent alternative management plans or environmental forecasts. Scenario analysis is often used to rank such alternatives to aid the decision making process. However, model projections are subject to uncertainty in assumptions about model structure, parameter values, environmental inputs, and subcomponent interactions. We introduce an approach for testing the robustness of model-based management decisions to the uncertainty inherent in complex ecological models and their inputs. We use relative assessment to quantify the relative impacts of uncertainty on scenario ranking. To illustrate our approach we consider uncertainty in parameter values and uncertainty in input data, with specific examples drawn from the Florida Everglades restoration project. Our examples focus on two alternative 30-year hydrologic management plans that were ranked according to their overall impacts on wildlife habitat potential. We tested the assumption that varying the parameter settings and inputs of habitat index models does not change the rank order of the hydrologic plans. We compared the average projected index of habitat potential for four endemic species and two wading-bird guilds to rank the plans, accounting for variations in parameter settings and water level inputs associated with hypothetical future climates. Indices of habitat potential were based on projections from spatially explicit models that are closely tied to hydrology. For the American alligator, the rank order of the hydrologic plans was unaffected by substantial variation in model parameters. By contrast, simulated major shifts in water levels led to reversals in the ranks of the hydrologic plans in 24.1-30.6% of the projections for the wading bird guilds and several individual species. By exposing the differential effects of uncertainty, relative assessment can help resource managers assess the robustness of scenario choice in model-based policy decisions.

  7. Sparse Polynomial Chaos Surrogate for ACME Land Model via Iterative Bayesian Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2015-12-01

    For computationally expensive climate models, Monte-Carlo approaches of exploring the input parameter space are often prohibitive due to slow convergence with respect to ensemble size. To alleviate this, we build inexpensive surrogates using uncertainty quantification (UQ) methods employing Polynomial Chaos (PC) expansions that approximate the input-output relationships using as few model evaluations as possible. However, when many uncertain input parameters are present, such UQ studies suffer from the curse of dimensionality. In particular, for 50-100 input parameters non-adaptive PC representations have infeasible numbers of basis terms. To this end, we develop and employ Weighted Iterative Bayesian Compressive Sensing to learn the most important input parameter relationships for efficient, sparse PC surrogate construction with posterior uncertainty quantified due to insufficient data. Besides drastic dimensionality reduction, the uncertain surrogate can efficiently replace the model in computationally intensive studies such as forward uncertainty propagation and variance-based sensitivity analysis, as well as design optimization and parameter estimation using observational data. We applied the surrogate construction and variance-based uncertainty decomposition to Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
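
    A much-reduced sketch of a polynomial chaos surrogate: a total-degree-2 Legendre expansion in two inputs fitted by ordinary least squares (not the sparse Bayesian compressive-sensing construction used in the study), with a variance decomposition read off the coefficients. The toy "land model" output and its coefficients are assumptions for illustration.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    rng = np.random.default_rng(8)

    def model(x1, x2):
        """Toy stand-in for an expensive land-model output (e.g. annual GPP)."""
        return 3.0 + 2.0 * x1 + 0.5 * x2 + 1.5 * x1 * x2 + 0.8 * x1**2

    # Small training ensemble over two inputs scaled to [-1, 1]
    n_train = 40
    X = rng.uniform(-1, 1, (n_train, 2))
    y = model(X[:, 0], X[:, 1])

    # Total-degree-2 Legendre polynomial chaos basis in two dimensions
    multis = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
    def basis(x):
        return np.column_stack([L.legval(x[:, 0], np.eye(3)[i]) * L.legval(x[:, 1], np.eye(3)[j])
                                for i, j in multis])

    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

    # The inexpensive surrogate replaces the model, e.g. at new input points
    X_test = rng.uniform(-1, 1, (5, 2))
    print(np.c_[model(X_test[:, 0], X_test[:, 1]), basis(X_test) @ coef].round(3))

    # Variance-based sensitivity comes almost for free from the PC coefficients
    # (E[P_n^2] = 1/(2n+1) for inputs uniform on [-1, 1]).
    norms = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in multis])
    var_terms = coef**2 * norms
    print("fraction of variance from x1 alone:",
          round(float(var_terms[[1, 3]].sum() / var_terms[1:].sum()), 2))
    ```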

  8. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.

  9. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  10. Uncertainties in Galactic Chemical Evolution Models

    DOE PAGES

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...

    2016-06-15

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
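    A minimal sketch of the Monte Carlo propagation strategy described above: draw the uncertain inputs from assumed probability distributions, evaluate the model for each draw, and summarize the ensemble with 68% and 95% levels. The toy model, parameter names, and distributions are illustrative placeholders, not the paper's actual inputs:

      # Propagate input-parameter PDFs through a (placeholder) model by Monte Carlo
      # and summarize the ensemble with 68% and 95% percentile bands.
      import numpy as np

      rng = np.random.default_rng(42)

      def toy_model(imf_slope, n_snia_per_msun, t):
          # Stand-in for the chemical evolution model's output versus time.
          return -1.0 + 0.3 * imf_slope * np.log10(t) + 50.0 * n_snia_per_msun * t

      n_runs = 500
      t = np.linspace(0.1, 13.0, 100)                      # galactic age in Gyr
      imf_slope = rng.normal(-2.35, 0.2, n_runs)           # assumed PDF
      n_snia = rng.lognormal(np.log(1.5e-3), 0.3, n_runs)  # assumed PDF

      ensemble = np.array([toy_model(a, b, t) for a, b in zip(imf_slope, n_snia)])
      median = np.percentile(ensemble, 50, axis=0)
      band68 = np.percentile(ensemble, [16, 84], axis=0)
      band95 = np.percentile(ensemble, [2.5, 97.5], axis=0)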

  11. Assessment of input uncertainty by seasonally categorized latent variables using SWAT

    USDA-ARS?s Scientific Manuscript database

    Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...

  12. A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision

    NASA Technical Reports Server (NTRS)

    Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.

    1998-01-01

    We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the 46% calculated 1σ model uncertainty, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation problems and provide a measure of model performance which can be used in attempts to improve such models.
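    A minimal sketch of Latin Hypercube Sampling as used to generate the 419 input parameter sets: each input range is split into equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per dimension. The distribution mapped onto the design below is an illustrative assumption, not the GSFC model's actual input uncertainties:

      # Latin Hypercube design on the unit cube, then mapped to assumed input PDFs.
      import numpy as np
      from scipy import stats

      def latin_hypercube(n_samples, n_dims, rng):
          u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_dims):
              rng.shuffle(u[:, j])      # decouple the strata across dimensions
          return u                      # stratified uniform(0, 1) samples

      rng = np.random.default_rng(0)
      design = latin_hypercube(419, 3, rng)   # e.g., three uncertain rate factors
      # Example mapping: lognormal multiplicative uncertainty factors on reaction rates.
      rate_factors = stats.lognorm.ppf(design, s=0.2, scale=1.0)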

  13. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  14. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  15. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

  16. Uncertainties in Galactic Chemical Evolution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cote, Benoit; Ritter, Christian; Oshea, Brian W.

    Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.

  17. Model parameter uncertainty analysis for an annual field-scale P loss model

    NASA Astrophysics Data System (ADS)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.
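    A minimal sketch of the kind of regression-equation uncertainty analysis described above: estimate the parameter covariance of one fitted equation and form a prediction interval for a new observation. The data, predictor, and equation are placeholders, not APLE's internal regressions:

      # Parameter covariance and a 95% prediction interval for a linear regression.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      x = rng.uniform(0, 10, 40)                      # placeholder predictor
      y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 40)      # placeholder response (P loss)

      X = np.column_stack([np.ones_like(x), x])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      n, p = X.shape
      sigma2 = np.sum((y - X @ beta) ** 2) / (n - p)  # residual variance
      XtX_inv = np.linalg.inv(X.T @ X)
      cov_beta = sigma2 * XtX_inv                     # parameter uncertainty
      se_beta = np.sqrt(np.diag(cov_beta))

      x0 = np.array([1.0, 5.0])                       # new prediction point
      pred = x0 @ beta
      se_pred = np.sqrt(sigma2 * (1.0 + x0 @ XtX_inv @ x0))
      t_crit = stats.t.ppf(0.975, n - p)
      pred_interval = (pred - t_crit * se_pred, pred + t_crit * se_pred)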

  18. Aeroservoelastic Uncertainty Model Identification from Flight Data

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.

    2001-01-01

    Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.

  19. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the fuel burn-up calculation, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. This calculation is performed during irradiation using Monte Carlo N-Particle Transport. The uncertainty method is based on probability density functions. The code was developed as a Python script coupled with MCNPX for criticality and burn-up calculations. Simulations were performed by modeling the PWR core geometry with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculations use the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format; interfaces were developed for obtaining nuclear data in ACE format from ENDF through NJOY processing for temperature changes over a certain range.

  20. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research was carried out to develop a code for uncertainty analysis based on a statistical approach for assessing the uncertainty of input parameters. In the fuel burn-up calculation, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. This calculation is performed during irradiation using Monte Carlo N-Particle Transport. The uncertainty method is based on probability density functions. The code was developed as a Python script coupled with MCNPX for criticality and burn-up calculations. Simulations were performed by modeling the PWR core geometry with MCNPX at a power of 54 MW with UO2 pellet fuel. The calculations use the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format; interfaces were developed for obtaining nuclear data in ACE format from ENDF through NJOY processing for temperature changes over a certain range.

  1. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.

  2. Tolerance and UQ4SIM: Nimble Uncertainty Documentation and Analysis Software

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2008-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and variabilities is a necessary first step toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. The basic premise of uncertainty markup is to craft a tolerance and tagging mini-language that offers a natural, unobtrusive presentation and does not depend on parsing each type of input file format. Each file is marked up with tolerances and, optionally, associated tags that serve to label the parameters and their uncertainties. The evolution of such a language, often called a Domain Specific Language or DSL, is given in [1], but in final form it parallels tolerances specified on an engineering drawing, e.g., 1 +/- 0.5, 5 +/- 10%, 2 +/- 1o, where % signifies percent and o signifies order of magnitude. Tags, necessary for error propagation, can be added by placing a quotation-mark-delimited tag after the tolerance, e.g., 0.7 +/- 20% 'T_effective'. In addition, tolerances might have different underlying distributions, e.g., Uniform, Normal, or Triangular, or the tolerances may merely be intervals due to lack of knowledge (uncertainty). Finally, to address pragmatic considerations such as older models that require specific number-field formats, C-style format specifiers can be appended to the tolerance like so, 1.35 +/- 10U_3.2f. As an example of use, consider figure 1, where a chemical reaction input file has been marked up to include tolerances and tags per table 1. Not only does the technique provide a natural method of specifying tolerances, but it also serves as in situ documentation of model uncertainties. This tolerance language comes with a utility to strip the tolerances (and tags), to provide a path to the nominal model parameter file. And, as shown in [1], having the ability to quickly mark and identify model parameter uncertainties facilitates error propagation, which in turn yields output uncertainties.
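    A minimal sketch of a parser for the tolerance markup illustrated above, covering the absolute and percent forms plus the optional quoted tag; this is a hypothetical illustration, not the actual UQ4SIM/Tolerance utility:

      # Parse "nominal +/- tolerance[%] ['tag']" markup into (nominal, abs_tol, tag).
      import re

      TOL_RE = re.compile(
          r"(?P<nom>[-+]?\d*\.?\d+)\s*\+/-\s*(?P<tol>\d*\.?\d+)(?P<pct>%?)"
          r"(?:\s*'(?P<tag>[^']*)')?"
      )

      def parse_tolerance(text):
          m = TOL_RE.search(text)
          if m is None:
              return None
          nominal = float(m.group("nom"))
          tol = float(m.group("tol"))
          if m.group("pct"):                 # percent tolerance -> absolute value
              tol = abs(nominal) * tol / 100.0
          return nominal, tol, m.group("tag")

      print(parse_tolerance("0.7 +/- 20% 'T_effective'"))   # (0.7, 0.14, 'T_effective')
      print(parse_tolerance("1 +/- 0.5"))                   # (1.0, 0.5, None)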

  3. Uncertainty analyses of CO2 plume expansion subsequent to wellbore CO2 leakage into aquifers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou, Zhangshuan; Bacon, Diana H.; Engel, David W.

    2014-08-01

    In this study, we apply an uncertainty quantification (UQ) framework to CO2 sequestration problems. In one scenario, we look at the risk of wellbore leakage of CO2 into a shallow unconfined aquifer in an urban area; in another scenario, we study the effects of reservoir heterogeneity on CO2 migration. We combine various sampling approaches (quasi-Monte Carlo, probabilistic collocation, and adaptive sampling) in order to reduce the number of forward calculations while trying to fully explore the input parameter space and quantify the input uncertainty. The CO2 migration is simulated using the PNNL-developed simulator STOMP-CO2e (the water-salt-CO2 module). For computationally demanding simulations with 3D heterogeneity fields, we combined the framework with a scalable version module, eSTOMP, as the forward modeling simulator. We built response curves and response surfaces of model outputs with respect to input parameters, to look at the individual and combined effects, and identify and rank the significance of the input parameters.

  4. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    PubMed Central

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-01-01

    Background: The estimation of health impacts often involves uncertain input variables and assumptions, which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods: Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results: The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion: When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598

  5. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    PubMed

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts often involves uncertain input variables and assumptions, which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
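    A minimal sketch of the rank-order correlation sensitivity analysis described in both records above: sample the uncertain inputs, evaluate a placeholder impact model, and rank the inputs by the absolute Spearman correlation with the output. The input names, distributions, and model are illustrative assumptions, not the study's actual values:

      # Rank inputs by |Spearman rank correlation| with the model output.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      n = 2000
      inputs = {
          "exposure_response": rng.normal(1.0, 0.3, n),
          "exposure": rng.lognormal(2.0, 0.4, n),
          "discount_rate": rng.uniform(0.0, 0.05, n),
      }

      def impact_model(p):
          # Placeholder for the life-table calculation of years of life lost.
          return p["exposure_response"] * p["exposure"] * np.exp(-20.0 * p["discount_rate"])

      output = impact_model(inputs)
      sensitivity = {
          name: stats.spearmanr(values, output).correlation
          for name, values in inputs.items()
      }
      ranking = sorted(sensitivity.items(), key=lambda kv: -abs(kv[1]))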

  6. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

    The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments to reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect that each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We modify the main input parameters (the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient), perform thousands of simulations, and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted some minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted some hours. Sensitivity analyses and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.

  7. SOARCA Peach Bottom Atomic Power Station Long-Term Station Blackout Uncertainty Analysis: Knowledge Advancement.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.

    2014-02-01

    This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.

  8. Monitoring and modeling as a continuing learning process: the use of hydrological models in a general probabilistic framework.

    NASA Astrophysics Data System (ADS)

    Baroni, G.; Gräff, T.; Reinstorf, F.; Oswald, S. E.

    2012-04-01

    Nowadays, uncertainty and sensitivity analyses are considered basic tools for the assessment of hydrological models and the evaluation of the most important sources of uncertainty. In this context, in the last decades several methods have been developed and applied in different hydrological conditions. However, in most cases, studies have mainly investigated the influence of parameter uncertainty on the simulated outputs, and few approaches have also considered other sources of uncertainty, i.e., input and model structure. Moreover, several constraints arise when spatially distributed parameters are involved. To overcome these limitations, a general probabilistic framework based on Monte Carlo simulations and the Sobol method has been proposed. In this study, the general probabilistic framework was applied at field scale using a 1D physically based hydrological model (SWAP). Furthermore, the framework was extended to catchment scale in combination with a spatially distributed hydrological model (SHETRAN). The models are applied in two different experimental sites in Germany: a relatively flat cropped field close to Potsdam (Brandenburg) and a small mountainous catchment with agricultural land use (Schaefertal, Harz Mountains). For both cases, inputs and parameters are considered the major sources of uncertainty. Evaluation of the models was based on soil moisture detected at plot scale at different depths and, for the catchment site, also on daily discharge values. The study shows how the framework can take into account all the various sources of uncertainty, i.e., input data, parameters (either in scalar or spatially distributed form) and model structures. The framework can be used in a loop in order to optimize further monitoring activities used to improve the performance of the model. In the particular applications, the results show how the sources of uncertainty are specific to each process considered. The influence of the input data, as well as the presence of compensating errors, becomes clear from the different processes simulated.
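    A minimal sketch of a variance-based (Sobol) sensitivity analysis of the kind the framework above builds on, using a Saltelli-style pick-and-freeze estimator; the analytic test function stands in for SWAP/SHETRAN runs driven by sampled inputs and parameters:

      # First-order and total-order Sobol indices via pick-and-freeze sampling.
      import numpy as np

      def model(x):
          # Placeholder (Ishigami-type) model with three uncertain factors.
          return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

      rng = np.random.default_rng(11)
      n, d = 4096, 3
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      first_order, total_order = [], []
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                 # replace only the i-th column
          fABi = model(ABi)
          first_order.append(np.mean(fB * (fABi - fA)) / var)        # Saltelli (2010)
          total_order.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # Jansen estimator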

  9. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
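    A minimal sketch of the "output errors as a first-order autoregressive process" idea described above: superimpose AR(1) error realizations on a deterministic model output and read off time-dependent prediction intervals. The AR(1) parameters and the placeholder hydrograph are illustrative, not inferred values:

      # AR(1) output-error realizations around a deterministic output, with 95% intervals.
      import numpy as np

      rng = np.random.default_rng(5)
      n_t, n_reps = 200, 1000
      phi, sigma = 0.8, 0.5                                         # assumed AR(1) parameters
      model_output = 2.0 + np.sin(np.linspace(0, 6 * np.pi, n_t))   # placeholder hydrograph

      errors = np.zeros((n_reps, n_t))
      eps = rng.normal(size=(n_reps, n_t))
      for t in range(1, n_t):
          errors[:, t] = phi * errors[:, t - 1] + sigma * eps[:, t]   # e_t = phi*e_{t-1} + noise

      predictions = model_output + errors
      lower, upper = np.percentile(predictions, [2.5, 97.5], axis=0)  # reliability check input
      mean_width = np.mean(upper - lower)                             # precision (sharpness)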

  10. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically-derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.

  11. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
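    A minimal sketch of the first-order approximate statistical moment method described above: for statistically independent, normally distributed inputs, the output variance is approximated from squared sensitivity derivatives times the input variances. The placeholder function and finite-difference gradient stand in for the CFD output and its analytic sensitivity derivatives:

      # First-order moment propagation: var(f) ~= sum_i (df/dx_i)^2 * sigma_i^2.
      import numpy as np

      def f(x):
          # Placeholder for a CFD output quantity of interest.
          return x[0] ** 2 * np.sin(x[1]) + 3.0 * x[2]

      mu = np.array([1.0, 0.5, 2.0])        # input means (illustrative)
      sigma = np.array([0.05, 0.02, 0.1])   # input standard deviations (illustrative)

      grad = np.zeros_like(mu)
      h = 1e-6
      for i in range(mu.size):
          e = np.zeros_like(mu)
          e[i] = h
          grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)   # stand-in for sensitivity derivatives

      mean_f = f(mu)                        # first-order mean estimate
      var_f = np.sum((grad * sigma) ** 2)   # first-order variance estimate
      std_f = np.sqrt(var_f)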

  12. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE PAGES

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    2018-02-21

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here in this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the resultsmore » of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.« less

  13. Uncertainty Analysis in 3D Equilibrium Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.

    Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here in this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the resultsmore » of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.« less

  14. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of the uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple to apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level are evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
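    A minimal sketch of the spectral idea described above, reduced to one uncertain input: approximate the dependence of an output on that input with a Chebyshev expansion fitted at a few nodes, then propagate an assumed input distribution through the cheap surrogate. The paper derives the coefficients with a weighted residual method over time, space, and inputs; here they are simply fitted, and the model is a placeholder:

      # Chebyshev surrogate in one uncertain parameter, then Monte Carlo on the surrogate.
      import numpy as np

      def expensive_model(p):
          # Placeholder for a simulation output as a function of the loss parameter p.
          return np.exp(-2.0 * p) * np.cos(3.0 * p)

      nodes = np.cos(np.pi * (np.arange(9) + 0.5) / 9)    # Chebyshev nodes on [-1, 1]
      coeffs = np.polynomial.chebyshev.chebfit(nodes, expensive_model(nodes), deg=8)

      p_samples = np.random.default_rng(0).uniform(-1.0, 1.0, 10000)  # assumed input PDF
      out = np.polynomial.chebyshev.chebval(p_samples, coeffs)
      mean, std = out.mean(), out.std()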

  15. Calibration-induced uncertainty of the EPIC model to estimate climate change impact on global maize yield

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Skalský, Rastislav; Porter, Cheryl H.; Balkovič, Juraj; Jones, James W.; Yang, Di

    2016-09-01

    Understanding the interactions between agricultural production and climate is necessary for sound decision-making in climate policy. Gridded and high-resolution crop simulation has emerged as a useful tool for building this understanding. Large uncertainty exists in this utilization, obstructing its capacity as a tool to devise adaptation strategies. Increasing focus has been given to sources of uncertainty for climate scenarios, input data, and models, but uncertainties due to model parameters or calibration are still unknown. Here, we use publicly available geographical data sets as input to the Environmental Policy Integrated Climate model (EPIC) for simulating global-gridded maize yield. Impacts of climate change are assessed up to the year 2099 under a climate scenario generated by HadEM2-ES under RCP 8.5. We apply five strategies by shifting one specific parameter in each simulation to calibrate the model and understand the effects of calibration. Regionalizing crop phenology or harvest index appears effective for calibrating the model for the globe, but using various values of phenology generates pronounced differences in the estimated climate impact. However, projected impacts of climate change on global maize production are consistently negative regardless of the parameter being adjusted. Different values of model parameters result in a modest uncertainty at the global level, with differences in the global yield change of less than 30% by the 2080s. The uncertainty decreases if model calibration or input data quality control is applied. Calibration has a larger effect at local scales, implying the possible types and locations for adaptation.

  16. A robust momentum management and attitude control system for the space station

    NASA Technical Reports Server (NTRS)

    Speyer, J. L.; Rhee, Ihnseok

    1991-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  17. Robust momentum management and attitude control system for the Space Station

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1992-01-01

    A game theoretic controller is synthesized for momentum management and attitude control of the Space Station in the presence of uncertainties in the moments of inertia. Full state information is assumed since attitude rates are assumed to be very accurately measured. By an input-output decomposition of the uncertainty in the system matrices, the parameter uncertainties in the dynamic system are represented as an unknown gain associated with an internal feedback loop (IFL). The input and output matrices associated with the IFL form directions through which the uncertain parameters affect system response. If the quadratic form of the IFL output augments the cost criterion, then enhanced parameter robustness is anticipated. By considering the input and the input disturbance from the IFL as two noncooperative players, a linear-quadratic differential game is constructed. The solution in the form of a linear controller is used for synthesis. Inclusion of the external disturbance torques results in a dynamic feedback controller which consists of conventional PID (proportional integral derivative) control and cyclic disturbance rejection filters. It is shown that the game theoretic design allows large variations in the inertias in directions of importance.

  18. Uncertainty analysis of an inflow forecasting model: extension of the UNEEC machine learning-based method

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri

    2010-05-01

    This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, first we use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method that encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide a more realistic estimation of model predictions.

  19. Uncertainty in BMP evaluation and optimization for watershed management

    NASA Astrophysics Data System (ADS)

    Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.

    2012-12-01

    Use of computer simulation models has increased substantially to make watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g., measured precipitation, streamflow, sediment, nutrient and pesticide losses; land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We have used a watershed model (Soil and Water Assessment Tool or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted due to uncertainties in land use, climate change, and model parameter values.

  20. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  1. Quantifying uncertainty and sensitivity in sea ice models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego Blanco, Jorge Rolando; Hunke, Elizabeth Clare; Urban, Nathan Mark

    The Los Alamos Sea Ice model has a number of input parameters for which accurate values are not always well established. We conduct a variance-based sensitivity analysis of hemispheric sea ice properties to 39 input parameters. The method accounts for non-linear and non-additive effects in the model.

  2. Uncertainty characterization and quantification in air pollution models. Application to the ADMS-Urban model.

    NASA Astrophysics Data System (ADS)

    Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.

    2009-04-01

    Evaluation of human exposure to atmospheric pollution usually requires the knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economical status on relationships between air pollution and short term health effects, the concentrations of gas and particle pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. There are several sources of uncertainties: related to input data and parameters, i.e., fields used to execute the model such as meteorological fields, boundary conditions and emissions; related to the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and inherent to the stochastic behavior of the atmosphere and human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. In this scope, the first step consisted in identifying the input data and model parameters that contribute most to the space and time variability of the predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominating influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamic process. The second step of the characterization and quantification of uncertainties consists in attributing a probability distribution to each input data and model parameter and in propagating the joint distribution of all data and parameters into the model so as to associate a probability distribution to the modeled concentrations. Several analytical and numerical methods exist to perform an uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters, but its drawbacks are its computational cost and its slow convergence. In order to speed up convergence we used the method of antithetic variables, which takes advantage of the symmetry of probability laws. The air quality model simulations were carried out by the Association for the Surveillance and Study of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project also aiming at assessing the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org) in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. 
Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
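
    As a minimal illustration of the antithetic-variates idea described above, the sketch below pairs each uniform sample u with its mirror 1 - u before mapping both onto the input distributions of a toy linear dispersion model; the model, the distributions and their parameters are illustrative stand-ins, not the ADMS-Urban setup.

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(42)

def concentration(emission, boundary, roughness):
    """Toy stand-in for a dispersion model (not ADMS-Urban)."""
    return 0.8 * emission + 0.5 * boundary + 2.0 / np.sqrt(roughness)

def run(u):
    """Map uniform variates onto illustrative input distributions and run the model."""
    emission = lognorm.ppf(u[:, 0], s=0.3, scale=10.0)   # emission rate
    boundary = norm.ppf(u[:, 1], loc=20.0, scale=3.0)    # boundary concentration
    roughness = lognorm.ppf(u[:, 2], s=0.5, scale=0.5)   # roughness length
    return concentration(emission, boundary, roughness)

n = 5000
u = rng.random((n, 3))
y_plain = run(u)                          # plain Monte Carlo
y_anti = 0.5 * (run(u) + run(1.0 - u))    # antithetic pairs (u, 1 - u), two runs per pair

print("plain MC      mean %.2f  se %.3f" % (y_plain.mean(), y_plain.std() / np.sqrt(n)))
print("antithetic MC mean %.2f  se %.3f" % (y_anti.mean(), y_anti.std() / np.sqrt(n)))
```

    Because each antithetic pair uses two model evaluations, the printed standard errors compare estimator spread per pair; for outputs that respond monotonically to the inputs, the antithetic estimator typically converges noticeably faster.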

  3. Prediction and assimilation of surf-zone processes using a Bayesian network: Part II: Inverse models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    A Bayesian network model has been developed to simulate a relatively simple problem of wave propagation in the surf zone (detailed in Part I). Here, we demonstrate that this Bayesian model can provide both inverse modeling and data-assimilation solutions for predicting offshore wave heights and depth estimates given limited wave-height and depth information from an onshore location. The inverse method is extended to allow data assimilation using observational inputs that are not compatible with deterministic solutions of the problem. These inputs include sand bar positions (instead of bathymetry) and estimates of the intensity of wave breaking (instead of wave-height observations). Our results indicate that wave breaking information is essential to reduce prediction errors. In many practical situations, this information could be provided by a shore-based observer or by remote-sensing systems. We show that various combinations of the assimilated inputs significantly reduce the uncertainty in the estimates of water depths and wave heights in the model domain. Application of the Bayesian network model to new field data demonstrated significant predictive skill (R2 = 0.7) for the inverse estimate of a month-long time series of offshore wave heights. The Bayesian inverse results include uncertainty estimates that were shown to be most accurate when uncertainty in the inputs (e.g., depth and tuning parameters) was accounted for. Furthermore, the inverse modeling was extended to directly estimate tuning parameters associated with the underlying wave-process model. The inverse estimates of the model parameters not only showed an offshore wave-height dependence consistent with results of previous studies, but their uncertainty estimates also explain previously reported variations in the model parameters.
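
    A rough sketch of the inverse step the abstract describes, reduced to a single discretized Bayes-rule update: a toy depth-limited forward model (not the trained Bayesian network of Part I) maps offshore wave height to an onshore observation, and the posterior over offshore height is obtained by weighting a prior with the observation likelihood; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# Discretized offshore wave heights (m) with a uniform prior.
H_off = np.linspace(0.5, 5.0, 46)
prior = np.full_like(H_off, 1.0 / H_off.size)

def forward(H):
    """Toy depth-limited breaking model: onshore height saturates with depth."""
    depth = 2.0  # m, assumed known here
    return np.minimum(H, 0.55 * depth)

# Inverse step: observe an onshore wave height and invert with Bayes' rule.
H_obs, sigma_obs = 1.0, 0.15
likelihood = norm.pdf(H_obs, loc=forward(H_off), scale=sigma_obs)
posterior = prior * likelihood
posterior /= posterior.sum()

mean = np.sum(H_off * posterior)
std = np.sqrt(np.sum((H_off - mean) ** 2 * posterior))
print(f"posterior offshore wave height: {mean:.2f} +/- {std:.2f} m")
```

    If the observed onshore height sits at the breaking limit, many offshore heights are equally consistent with it and the posterior stays broad, which mirrors the paper's finding that wave-breaking information is what narrows the inverse estimates.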

  4. Constant-Elasticity-of-Substitution Simulation

    NASA Technical Reports Server (NTRS)

    Reiter, G.

    1986-01-01

    Program simulates constant-elasticity-of-substitution (CES) production function. The CES function is used by economic analysts to examine production costs as well as uncertainties in production. User provides input parameters such as price of labor, price of capital, and dispersion levels. The program minimizes expected cost to produce a capital-uncertainty pair; by varying the capital-value input, one obtains a series of capital-uncertainty pairs, which are then used to generate several cost curves. CES program is menu driven and features a specific print menu for examining selected output curves. Program written in BASIC for interactive execution and implemented on IBM PC-series computer.
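
    The following sketch reproduces the core CES calculation in Python rather than the original BASIC: a CES production function and a numerical cost minimization for a required output level, with illustrative prices and parameters that are not taken from the program.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative CES parameters (not the values used by the original program).
A, a, rho = 1.0, 0.4, 0.5          # scale, capital share, substitution parameter
w, r, Q_target = 12.0, 8.0, 100.0  # price of labor, price of capital, required output

def ces_output(K, L):
    """CES production: Q = A * (a*K^rho + (1-a)*L^rho)^(1/rho)."""
    return A * (a * K**rho + (1 - a) * L**rho) ** (1 / rho)

def cost(x):
    K, L = x
    return r * K + w * L

res = minimize(cost, x0=[50.0, 50.0], method="SLSQP",
               bounds=[(1e-6, None), (1e-6, None)],
               constraints=[{"type": "eq", "fun": lambda x: ces_output(*x) - Q_target}])
K_opt, L_opt = res.x
print(f"cost-minimizing inputs: K={K_opt:.1f}, L={L_opt:.1f}, cost={res.fun:.1f}")
```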

  5. MODFLOW 2000 Head Uncertainty, a First-Order Second Moment Method

    USGS Publications Warehouse

    Glasgow, H.S.; Fortney, M.D.; Lee, J.; Graettinger, A.J.; Reeves, H.W.

    2003-01-01

    A computationally efficient method to estimate the variance and covariance in piezometric head results computed through MODFLOW 2000 using a first-order second moment (FOSM) approach is presented. This methodology employs a first-order Taylor series expansion to combine model sensitivity with uncertainty in geologic data. MODFLOW 2000 is used to calculate both the ground water head and the sensitivity of head to changes in input data. From a limited number of samples, geologic data are extrapolated and their associated uncertainties are computed through a conditional probability calculation. Combining the spatially related sensitivity and input uncertainty produces the variance-covariance matrix, the diagonal of which is used to yield the standard deviation in MODFLOW 2000 head. The variance in piezometric head can be used for calibrating the model, estimating confidence intervals, directing exploration, and evaluating the reliability of a design. A case study illustrates the approach, where aquifer transmissivity is the spatially related uncertain geologic input data. The FOSM methodology is shown to be applicable for calculating output uncertainty for (1) spatially related input and output data, and (2) multiple input parameters (transmissivity and recharge).
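
    A compact numerical sketch of the FOSM propagation step, with a hypothetical sensitivity matrix standing in for the head sensitivities MODFLOW 2000 would provide and an assumed input covariance for transmissivity and recharge.

```python
import numpy as np

# Hypothetical example: 3 head locations, 2 uncertain inputs (ln-transmissivity, recharge).
# J[i, j] = d(head_i)/d(input_j), e.g. taken from MODFLOW 2000 sensitivity output.
J = np.array([[0.8, 12.0],
              [0.5, 20.0],
              [0.2, 30.0]])

# Input covariance built from assumed standard deviations and a correlation matrix.
sd = np.array([0.4, 0.01])
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])
C_in = np.outer(sd, sd) * corr

# First-order second moment propagation: Cov(head) ~ J C_in J^T.
C_head = J @ C_in @ J.T
head_sd = np.sqrt(np.diag(C_head))
print("head standard deviations:", head_sd)
```

    The diagonal of the propagated matrix gives the head variances used for confidence intervals; the off-diagonal terms carry the spatial covariance that the abstract refers to.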

  6. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models depends mostly on the input rainfall and the parameter values. Both model parameters and input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty in the model output. Sensitivity analysis (SA) makes it possible to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of factors included in the SA to represent the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty affects the model output most, because of the different weight both types have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios with diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single factor. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear influence of the weights in the different SA scenarios; working with grouped factors, however, resolves this issue and leads to clear importance results.
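
    To make the grouping idea concrete, the sketch below estimates first-order Sobol' indices for two grouped factors, "model parameters" and "rainfall multipliers", using a standard Saltelli-type estimator on a toy rainfall-runoff response; the model, parameter ranges and multiplier bounds are invented for illustration and are not NAM or HyMod.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.gamma(2.0, 5.0, size=30)           # synthetic daily rainfall series (mm)

n_par, n_mult = 3, 30                      # 3 model parameters + one multiplier per rain day
d = n_par + n_mult

def model(x):
    """Toy rainfall-runoff response: parameters scale/shape the perturbed rainfall signal."""
    pars, mult = x[:, :n_par], x[:, n_par:]
    eff_rain = (mult * P).sum(axis=1)
    return pars[:, 0] * eff_rain ** (0.8 + 0.4 * pars[:, 1]) + 50.0 * pars[:, 2]

def sample(n):
    x = rng.random((n, d))                 # model parameters ~ U(0, 1)
    x[:, n_par:] = 0.7 + 0.6 * x[:, n_par:]  # rainfall multipliers ~ U(0.7, 1.3)
    return x

N = 20000
A, B = sample(N), sample(N)
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

def first_order_group(cols):
    """Saltelli-style first-order index for a *group* of columns treated as one factor."""
    AB = A.copy()
    AB[:, cols] = B[:, cols]
    return np.mean(yB * (model(AB) - yA)) / var_y

S_parameters = first_order_group(list(range(n_par)))
S_rainfall = first_order_group(list(range(n_par, d)))
print(f"S_parameters ~ {S_parameters:.2f}, S_rainfall ~ {S_rainfall:.2f}")
```

    Replacing all of a group's columns at once is what collapses the thirty multipliers into a single factor, so the two indices can be compared directly regardless of how many individual inputs each group contains.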

  7. 'spup' - an R package for uncertainty propagation in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2016-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including to case studies with spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy-to-apply tool that can be used in multi-disciplinary research and model-based decision support.
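
    The R package itself is not reproduced here; the sketch below only illustrates, in Python, the kind of Latin hypercube Monte Carlo propagation that 'spup' automates: sample uncertain inputs, run a placeholder environmental model, and summarize the output ensemble. The distribution choices and the model are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import qmc, norm, lognorm

# Latin hypercube sample of two uncertain model inputs (illustrative distributions).
n = 1000
lhs = qmc.LatinHypercube(d=2, seed=7)
u = lhs.random(n)
rainfall_mult = lognorm.ppf(u[:, 0], s=0.2, scale=1.0)   # multiplicative input error
soil_k = norm.ppf(u[:, 1], loc=0.35, scale=0.05)          # uncertain model parameter

def env_model(rain_factor, k):
    """Placeholder environmental model (e.g. a leaching or runoff calculation)."""
    return 120.0 * rain_factor * np.exp(-k * 10.0)

realizations = env_model(rainfall_mult, soil_k)

# Ensemble summaries of the kind an uncertainty-propagation package would report.
print("mean:", realizations.mean())
print("90% interval:", np.percentile(realizations, [5, 95]))
```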

  8. 'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling

    NASA Astrophysics Data System (ADS)

    Sawicka, Kasia; Heuvelink, Gerard

    2017-04-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability or the ability to deal with case studies involving spatial models and spatial model inputs. Given the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as input to environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy-to-apply tool that can be used in multi-disciplinary research and model-based decision support.

  9. Introducing uncertainty analysis of nucleation and crystal growth models in Process Analytical Technology (PAT) system design of crystallization processes.

    PubMed

    Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul

    2013-11-01

    This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
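
    A small sketch of the SRC computation mentioned above: regress a Monte Carlo output sample on the sampled inputs and rescale the coefficients by the input-to-output standard deviation ratio. The parameter names, ranges and the linear toy model are hypothetical, not the KDP crystallization model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sample of uncertain kinetic parameters (hypothetical names and ranges).
n = 2000
X = np.column_stack([
    rng.normal(2.0, 0.2, n),    # nucleation order
    rng.normal(1.5, 0.15, n),   # growth order
    rng.normal(1e4, 2e3, n),    # nucleation rate constant
])

# Toy model output standing in for a CSD summary statistic (e.g. mean crystal size).
y = 50.0 - 12.0 * X[:, 0] + 8.0 * X[:, 1] + 5e-4 * X[:, 2] + rng.normal(0, 1.0, n)

# Standardized regression coefficients: regress y on X, then rescale.
A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
src = beta * X.std(axis=0) / y.std()
print("SRC:", np.round(src, 2), " sum of squares (~R^2):", np.round(np.sum(src**2), 2))
```

    For a nearly linear response the squared SRCs sum to roughly the regression R^2, which is the usual check that the SRC ranking is trustworthy; when R^2 drops, screening methods such as Morris become the safer choice.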

  10. Uncertainty and sensitivity assessment of flood risk assessments

    NASA Astrophysics Data System (ADS)

    de Moel, H.; Aerts, J. C.

    2009-12-01

    Floods are among the most frequent and costly natural disasters. In order to protect human lives and valuable assets from the effects of floods, many defensive structures have been built. Despite these efforts, economic losses due to catastrophic flood events have risen substantially during the past couple of decades because of continuing economic development in flood-prone areas. On top of that, climate change is expected to affect the magnitude and frequency of flood events. Because these ongoing trends are expected to continue, a transition can be observed in various countries from a protective flood management approach to a more risk-based flood management approach. In a risk-based approach, flood risk assessments play an important role in supporting decision making. Most flood risk assessments express flood risk in monetary terms (damage estimated for specific situations or expected annual damage) in order to feed cost-benefit analyses of management measures. Such flood risk assessments contain, however, considerable uncertainties. This is the result of uncertainties in the many different input parameters propagating through the risk assessment and accumulating in the final estimate. Whilst common in some other disciplines, as with integrated assessment models, full uncertainty and sensitivity analyses of flood risk assessments are not so common. Various studies have addressed uncertainties in flood risk assessments, but have mainly focused on the hydrological conditions. However, uncertainties in other components of the risk assessment, like the relation between water depth and monetary damage, can be substantial as well. This research therefore tries to assess the uncertainties of all components of monetary flood risk assessments, using a Monte Carlo based approach. Furthermore, the total uncertainty will also be attributed to the different input parameters using a variance-based sensitivity analysis. Assessing and visualizing the uncertainties of the final risk estimate will help decision makers to make better informed decisions, and attributing this uncertainty to the input parameters helps to identify which parameters are most important when it comes to uncertainty in the final estimate and should therefore receive additional attention in further research.
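
    One way to set up such a Monte Carlo risk assessment is sketched below: damage is evaluated for a set of return periods, integrated over exceedance probability to an expected annual damage (EAD), and the uncertain depth-damage slope and exposed value are sampled in an outer loop. All curves and values are illustrative assumptions, not the case-study data.

```python
import numpy as np

rng = np.random.default_rng(3)

return_periods = np.array([2, 10, 50, 100, 500], dtype=float)
p_exceed = 1.0 / return_periods
flood_depth = np.array([0.2, 0.6, 1.1, 1.5, 2.2])      # m, e.g. from a hydraulic model

def damage(depth, dd_slope, exposed_value):
    """Simple depth-damage relation: damage fraction grows linearly up to 1."""
    frac = np.clip(dd_slope * depth, 0.0, 1.0)
    return frac * exposed_value

n = 10000
ead = np.empty(n)
for i in range(n):
    dd_slope = rng.normal(0.35, 0.10)                           # uncertain damage curve (1/m)
    exposed_value = rng.lognormal(mean=np.log(5e8), sigma=0.2)  # uncertain asset value
    D = damage(flood_depth, dd_slope, exposed_value)
    # Expected annual damage: integrate damage over exceedance probability.
    ead[i] = np.trapz(D[::-1], p_exceed[::-1])

print("EAD median:", np.median(ead), " 5-95%:", np.percentile(ead, [5, 95]))
```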

  11. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.

  12. Ensemble Kalman Filter for Dynamic State Estimation of Power Grids Stochastically Driven by Time-correlated Mechanical Input Power

    DOE PAGES

    Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu

    2017-10-31

    State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
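
    A stripped-down illustration of the approach: a single-machine swing equation whose mechanical input power follows an Ornstein-Uhlenbeck (time-correlated) process, filtered with a stochastic-forecast EnKF against noisy speed observations. This is a toy stand-in for the transmission network test case, with invented parameter values.

```python
import numpy as np

rng = np.random.default_rng(11)

# Single-machine swing-equation toy model (per unit), not a full transmission network.
M, D, b = 6.0, 0.8, 1.2                  # inertia, damping, max electrical power transfer
theta, Pm_bar, sigma = 0.5, 0.9, 0.05    # OU parameters for mechanical input power
dt, n_steps, n_ens = 0.02, 500, 100

def step(state):
    """One Euler-Maruyama step of [delta, omega, Pm] with time-correlated Pm noise."""
    delta, omega, Pm = state.T
    Pe = b * np.sin(delta)
    delta = delta + dt * omega
    omega = omega + dt * (Pm - Pe - D * omega) / M
    Pm = Pm + dt * theta * (Pm_bar - Pm) + sigma * np.sqrt(dt) * rng.standard_normal(len(Pm))
    return np.column_stack([delta, omega, Pm])

# Synthetic "truth" and noisy PMU-like observations of the speed deviation omega.
truth = np.array([[0.8, 0.0, 0.9]])
R = 0.02 ** 2
obs = []
for _ in range(n_steps):
    truth = step(truth)
    obs.append(truth[0, 1] + np.sqrt(R) * rng.standard_normal())

# Ensemble Kalman filter: stochastic forecast with the OU input, update with each observation.
ens = np.column_stack([rng.normal(0.5, 0.2, n_ens),
                       rng.normal(0.0, 0.05, n_ens),
                       rng.normal(0.8, 0.1, n_ens)])
for y in obs:
    ens = step(ens)
    Hx = ens[:, 1]                                    # observed quantity: omega
    X_pert = ens - ens.mean(axis=0)
    Hx_pert = Hx - Hx.mean()
    K = (X_pert.T @ Hx_pert) / (n_ens - 1) / (Hx_pert @ Hx_pert / (n_ens - 1) + R)
    y_pert = y + np.sqrt(R) * rng.standard_normal(n_ens)   # perturbed observations
    ens = ens + np.outer(y_pert - Hx, K)

print("true state :", truth[0])
print("EnKF mean  :", ens.mean(axis=0))
```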

  13. Effects of various experimental parameters on errors in triangulation solution of elongated object in space. [barium ion cloud

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1975-01-01

    The effects of various experimental parameters on the displacement errors in the triangulation solution of an elongated object in space due to pointing uncertainties in the lines of sight have been determined. These parameters were the number and location of observation stations, the object's location in latitude and longitude, and the spacing of the input data points on the azimuth-elevation image traces. The displacement errors due to uncertainties in the coordinates of a moving station have been determined as functions of the number and location of the stations. The effects of incorporating the input data from additional cameras at one of the stations were also investigated.

  14. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradable waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weight-averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the Intergovernmental Panel on Climate Change (IPCC), which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
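
    The translation exercise can be mimicked in a few lines: a first-order decay curve per waste component (the "multiphase" view) versus a single curve driven by tonnage-weighted average L0 and k. The component parameters and tonnages below are hypothetical, not the Danish or IPCC datasets.

```python
import numpy as np

# Hypothetical waste streams with component-specific FOD parameters
# (L0 in m^3 CH4 / Mg waste, k in 1/yr) and annual tonnages (Mg/yr).
components = {
    "food":  {"L0": 70.0,  "k": 0.40, "tonnes": 20000},
    "paper": {"L0": 110.0, "k": 0.06, "tonnes": 15000},
    "wood":  {"L0": 60.0,  "k": 0.02, "tonnes": 5000},
    "inert": {"L0": 0.0,   "k": 0.01, "tonnes": 10000},
}
years_of_filling, horizon = 20, 80
t = np.arange(horizon)

def fod_generation(L0, k, annual_mass):
    """Annual CH4 from one stream: first-order decay of each year's deposited mass."""
    q = np.zeros(horizon)
    for ti in range(years_of_filling):
        age = t - ti
        q += np.where(age >= 0, annual_mass * L0 * k * np.exp(-k * age), 0.0)
    return q

# Multiphase view: one FOD curve per component, then summed.
multi = sum(fod_generation(c["L0"], c["k"], c["tonnes"]) for c in components.values())

# Single-phase view: tonnage-weighted average L0 and k applied to the total mass.
masses = np.array([c["tonnes"] for c in components.values()])
L0_avg = np.average([c["L0"] for c in components.values()], weights=masses)
k_avg = np.average([c["k"] for c in components.values()], weights=masses)
single = fod_generation(L0_avg, k_avg, masses.sum())

print("cumulative CH4, multiphase  :", round(multi.sum()))
print("cumulative CH4, single-phase:", round(single.sum()))
```

    Both views share the same total L0 mass, so their cumulative totals converge over long horizons; the differences over a finite horizon come from the spread in k, which is the paper's point about parameter uncertainty dominating model structure.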

  15. Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors

    NASA Astrophysics Data System (ADS)

    Sims, Joseph David

    The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values---taken from the literature---was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. Species for a hydrogen-oxygen system are relatively simple, thereby resulting in low thermodynamic reference value uncertainties. Hydrocarbon combustors, solid rocket motors and hybrid rocket motors have combustion gases containing complex molecules that will likely have thermodynamic reference values with large uncertainties. Thus, every chemical system should be analyzed in a similar manner as that shown in this work.

  16. Numerical Uncertainty Analysis for Computational Fluid Dynamics using Student T Distribution -- Application of CFD Uncertainty Analysis Compared to Exact Analytical Solution

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; Ilie, Marcel; Shallhorn, Paul A.

    2014-01-01

    Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and the Student-t distribution. The approach is compared to an exact analytical solution of fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with the Student-t distribution can encompass the exact solution.
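
    The small-sample machinery behind such an approach is the ordinary Student-t confidence interval, sketched here for a handful of hypothetical CFD output samples obtained by perturbing uncertain inputs.

```python
import numpy as np
from scipy.stats import t

# Hypothetical repeated CFD estimates of a target quantity (e.g. pressure drop in Pa).
samples = np.array([412.3, 418.9, 409.7, 415.2, 421.0, 413.8, 417.5])
n = samples.size
mean, s = samples.mean(), samples.std(ddof=1)

conf = 0.95
t_crit = t.ppf(0.5 + conf / 2.0, df=n - 1)      # Student-t critical value, n-1 dof
half_width = t_crit * s / np.sqrt(n)
print(f"estimate: {mean:.1f} +/- {half_width:.1f} (95% confidence, n={n})")
```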

  17. Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa

    2018-05-01

    The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessments are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff-generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm dams, large reservoirs and irrigation were included. For the Shashe sub-basin, incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.

  18. Parameter uncertainty and variability in evaluative fate and exposure models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertwich, E.G.; McKone, T.E.; Pease, W.S.

    The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows one to focus resources if the uncertainty is to be reduced. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance is typically one to two orders of magnitude. For comparison, the point estimates in potential dose for the 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.

  19. Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT

    NASA Astrophysics Data System (ADS)

    Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang

    2015-03-01

    In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, and the effects on organ dose per CTDIvol were analyzed. Our study showed that when errors in half-value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had a negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer than the true value by 4 cm, dose errors of up to 160% were found. The results answer the important question of the level of accuracy to which each input parameter needs to be determined in order to obtain accurate organ dose results.

  20. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity - which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty - what is the uncertainty or variability in simulation output, given uncertainties in input parameters, and how safe, reliable, robust, or variable is my system (quantification of margins and uncertainty, QMU)? Optimization - what parameter values yield the best performing design or operating condition, given constraints? Calibration - what models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.

  1. Theoretical Grounds for the Propagation of Uncertainties in Monte Carlo Particle Transport

    NASA Astrophysics Data System (ADS)

    Saracco, Paolo; Pia, Maria Grazia; Batic, Matej

    2014-04-01

    We introduce a theoretical framework for the calculation of uncertainties affecting observables produced by Monte Carlo particle transport, which derive from uncertainties in physical parameters input into simulation. The theoretical developments are complemented by a heuristic application, which illustrates the method of calculation in a streamlined simulation environment.
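
    A heuristic in the same spirit as the streamlined application mentioned above: sample an uncertain attenuation coefficient from its assumed distribution, run an analog Monte Carlo transmission calculation for each sample, and read off the parameter-induced spread of the observable. The slab problem and numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_transmission(mu, thickness, n_particles=20000):
    """Analog Monte Carlo: fraction of particles crossing a purely absorbing slab."""
    path = rng.exponential(1.0 / mu, size=n_particles)   # distance to first interaction
    return np.mean(path > thickness)

# Outer loop: propagate uncertainty in the attenuation coefficient (an input
# physics parameter) through the transport simulation itself.
mu_mean, mu_rel_unc, thickness = 0.2, 0.05, 5.0          # cm^-1, 5% (1 sigma), cm
mu_samples = rng.normal(mu_mean, mu_rel_unc * mu_mean, size=200)
T = np.array([mc_transmission(mu, thickness) for mu in mu_samples])

print(f"transmission = {T.mean():.4f} +/- {T.std():.4f} (parameter-induced spread)")
print(f"analytic check exp(-mu*x) at nominal mu: {np.exp(-mu_mean * thickness):.4f}")
```

    In practice the statistical (finite-particle) variance and the parameter-induced variance have to be separated, which is exactly the bookkeeping the theoretical framework formalizes.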

  2. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
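
    The correlation-based sensitivity measures named above can be computed directly from the sampled inputs and outputs, as in this sketch; the input names, distributions and the linear response standing in for a VERA-CS figure of merit are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)

# Hypothetical sampled inputs (inlet temperature, assembly flow, power peaking factor)
# and a toy output standing in for MDNBR from a core-simulator run.
n = 500
X = np.column_stack([rng.normal(565, 3, n),      # inlet temperature (K)
                     rng.normal(0.47, 0.02, n),  # assembly flow (kg/s)
                     rng.normal(1.3, 0.05, n)])  # peaking factor
X[:, 1] -= 0.002 * (X[:, 0] - 565)               # induce some input correlation
y = 3.5 - 0.05 * (X[:, 0] - 565) + 8.0 * (X[:, 1] - 0.47) - 1.5 * (X[:, 2] - 1.3) \
    + rng.normal(0, 0.05, n)

def partial_corr(xj, y, others):
    """Correlation of xj with y after removing the linear effect of the other inputs."""
    A = np.column_stack([np.ones(len(y)), others])
    rx = xj - A @ np.linalg.lstsq(A, xj, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return pearsonr(rx, ry)[0]

for j, name in enumerate(["T_inlet", "flow", "peaking"]):
    others = np.delete(X, j, axis=1)
    print(f"{name:8s}  Pearson {pearsonr(X[:, j], y)[0]:+.2f}  "
          f"Spearman {spearmanr(X[:, j], y)[0]:+.2f}  "
          f"partial {partial_corr(X[:, j], y, others):+.2f}")
```

    The partial coefficient is the one that stays interpretable when inputs are themselves correlated, which is why all three measures are usually reported together.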

  3. Uncertainty based modeling of rainfall-runoff: Combined differential evolution adaptive Metropolis (DREAM) and K-means clustering

    NASA Astrophysics Data System (ADS)

    Zahmatkesh, Zahra; Karamouz, Mohammad; Nazif, Sara

    2015-09-01

    Simulation of the rainfall-runoff process in urban areas is of great importance considering the consequences and damages of extreme runoff events and floods. The first issue in flood hazard analysis is rainfall simulation. Large-scale climate signals have been proven to be effective in rainfall simulation and prediction. In this study, an integrated scheme is developed for rainfall-runoff modeling considering different sources of uncertainty. This scheme includes three main steps: rainfall forecasting, rainfall-runoff simulation and future runoff prediction. In the first step, data-driven models are developed and used to forecast rainfall using large-scale climate signals as rainfall predictors. Due to the strong effect of different sources of uncertainty on the output of hydrologic models, in the second step the uncertainty associated with input data, model parameters and model structure is incorporated in rainfall-runoff modeling and simulation. Three rainfall-runoff simulation models are developed for consideration of model conceptual (structural) uncertainty in real-time runoff forecasting. To analyze the uncertainty of the model structure, streamflows generated by alternative rainfall-runoff models are combined through a weighting method based on K-means clustering. Model parameter and input uncertainty are investigated using an adaptive Markov chain Monte Carlo method. Finally, the calibrated rainfall-runoff models are driven using the forecasted rainfall to predict future runoff for the watershed. The proposed scheme is employed in the case study of the Bronx River watershed, New York City. Results of the uncertainty analysis of rainfall-runoff modeling reveal that simultaneous estimation of model parameters and input uncertainty significantly changes the probability distribution of the model parameters. It is also observed that by combining the outputs of the hydrological models using the proposed clustering scheme, the accuracy of runoff simulation in the watershed is improved by up to 50% in comparison to the simulations by the individual models. Results indicate that the developed methodology not only provides reliable tools for rainfall and runoff modeling, but also adequate time for incorporating required mitigation measures in dealing with potentially extreme runoff events and flood hazards. Results of this study can be used in the identification of the main factors affecting flood hazard analysis.

  4. A Probabilistic Approach to Quantify the Impact of Uncertainty Propagation in Musculoskeletal Simulations

    PubMed Central

    Myers, Casey A.; Laz, Peter J.; Shelburne, Kevin B.; Davidson, Bradley S.

    2015-01-01

    Uncertainty that arises from measurement error and parameter estimation can significantly affect the interpretation of musculoskeletal simulations; however, these effects are rarely addressed. The objective of this study was to develop an open-source probabilistic musculoskeletal modeling framework to assess how measurement error and parameter uncertainty propagate through a gait simulation. A baseline gait simulation was performed for a male subject using OpenSim for three stages: inverse kinematics, inverse dynamics, and muscle force prediction. A series of Monte Carlo simulations were performed that considered intrarater variability in marker placement, movement artifacts in each phase of gait, variability in body segment parameters, and variability in muscle parameters calculated from cadaveric investigations. Propagation of uncertainty was performed by using the output distributions from one stage as the input distributions to subsequent stages. Confidence bounds (5–95%) and sensitivity of outputs to model input parameters were calculated throughout the gait cycle. The combined impact of uncertainty resulted in mean bounds that ranged from 2.7° to 6.4° in joint kinematics, 2.7 to 8.1 N m in joint moments, and 35.8 to 130.8 N in muscle forces. The impact of movement artifact was 1.8 times larger than any other propagated source. Sensitivity to specific body segment parameters and muscle parameters was linked to where in the gait cycle they were calculated. We anticipate that through the increased use of probabilistic tools, researchers will better understand the strengths and limitations of their musculoskeletal simulations and more effectively use simulations to evaluate hypotheses and inform clinical decisions. PMID:25404535

  5. Using a Software Tool in Forecasting: a Case Study of Sales Forecasting Taking into Account Data Uncertainty

    NASA Astrophysics Data System (ADS)

    Fabianová, Jana; Kačmáry, Peter; Molnár, Vieroslav; Michalik, Peter

    2016-10-01

    Forecasting is one of the logistics activities, and a sales forecast is the starting point for the elaboration of business plans. Forecast accuracy affects business outcomes and ultimately may significantly affect the economic stability of a company. The accuracy of the prediction depends on the suitability of the forecasting methods used, experience, the quality of the input data, the time period and other factors. The input data are usually not deterministic but are often of a random nature. They are affected by uncertainties of the market environment and many other factors. By taking the input data uncertainty into account, the forecast error can be reduced. This article deals with the use of a software tool for incorporating data uncertainty into forecasting. A forecasting approach is proposed and the impact of uncertain input parameters on the target forecast value is simulated with a case study model. A statistical analysis and risk analysis of the forecast results are carried out, including sensitivity analysis and variable impact analysis.
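
    A minimal sketch of the kind of Monte Carlo forecast the article describes: uncertain demand, growth and price are sampled, and the resulting revenue distribution is summarized with percentiles and a probability of missing a plan. The distributions and the plan figure are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical uncertain inputs for a simple next-quarter sales forecast.
n = 10000
base_demand = rng.triangular(900, 1000, 1200, n)   # units; expert min/mode/max
growth = rng.normal(0.03, 0.02, n)                 # market growth rate
price = rng.normal(49.0, 2.0, n)                   # selling price per unit

revenue = base_demand * (1.0 + growth) * price

print("expected revenue         :", round(revenue.mean()))
print("P10 / P90                :", np.round(np.percentile(revenue, [10, 90])))
print("probability below plan (50k):", np.mean(revenue < 50_000))
```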

  6. Reconstructing solar magnetic fields from historical observations. II. Testing the surface flux transport model

    NASA Astrophysics Data System (ADS)

    Virtanen, I. O. I.; Virtanen, I. I.; Pevtsov, A. A.; Yeates, A.; Mursula, K.

    2017-07-01

    Aims: We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. Methods: We tested the model by running simulations with different values of meridional circulation and supergranular diffusion parameters, and studied how the flux distribution inside active regions and the initial magnetic field affected the simulation. We compared the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion, and input data. We also compared the simulated magnetic field with observations. Results: We find that there is generally good agreement between simulations and observations. Although the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model the effects of the uncertainties are somewhat minor or temporary, lasting typically one solar cycle.

  7. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    USGS Publications Warehouse

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements on water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as input land surface temperature for the SSEBop algorithms. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimate while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to input variables, land surface temperature (LST) and reference ET (ETo); and parameters, differential temperature (dT), and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).
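
    A simplified reading of the SSEBop formulation (an ET fraction derived from land surface temperature, scaled by reference ET) is sketched below to show how sensitive the estimate is to the dT parameter; the equation form follows the published model only in outline, and all input values are illustrative.

```python
import numpy as np

def ssebop_et(lst, t_hot, dT, eto, k_max=1.0):
    """Simplified SSEBop-style ET estimate.

    lst   : remotely sensed land surface temperature (K)
    t_hot : idealized hot/dry reference temperature (K)
    dT    : predefined hot-to-cold temperature difference (K)
    eto   : reference ET (mm/day); k_max is the maximum ET scalar.
    """
    etf = np.clip((t_hot - lst) / dT, 0.0, 1.05)   # ET fraction
    return etf * k_max * eto

# Illustrative inputs for one pixel and a crude check of sensitivity to dT.
lst, t_hot, eto = 303.0, 318.0, 6.0
for dT in (18.0, 20.0, 22.0):
    print(f"dT={dT:.0f} K  ->  ET = {ssebop_et(lst, t_hot, dT, eto):.2f} mm/day")
```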

  8. Effects of Uncertainties in Hydrological Modelling. A Case Study of a Mountainous Catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Steinsland, Ingelin

    2016-04-01

    The aim of this study is to investigate how the inclusion of uncertainties in inputs and observed streamflow influences the parameter estimation, streamflow predictions and model evaluation. In particular we wanted to answer the following research questions: • What is the effect of including a random error in the precipitation and temperature inputs? • What is the effect of decreased information about precipitation, by excluding the nearest precipitation station? • What is the effect of the uncertainty in streamflow observations? • What is the effect of reduced information about the true streamflow, by using a rating curve where the measurements of the highest and lowest streamflow are excluded when estimating the rating curve? To answer these questions, we designed a set of calibration experiments and evaluation strategies. We used the elevation-distributed HBV model operating on daily time steps, combined with a Bayesian formulation and the MCMC routine DREAM for parameter inference. The uncertainties in inputs were represented by creating ensembles of precipitation and temperature. The precipitation ensembles were created using a meta-Gaussian random field approach. The temperature ensembles were created using 3D Bayesian kriging with random sampling of the temperature lapse rate. The streamflow ensembles were generated by a Bayesian multi-segment rating curve model. Precipitation and temperature were randomly sampled for every day, whereas the streamflow ensembles were generated from rating curve ensembles, and the same rating curve was always used for the whole time series in a calibration or evaluation run. We chose a catchment with a meteorological station measuring precipitation and temperature, and a rating curve of relatively high quality. This allowed us to investigate and further test the effect of having less information on precipitation and streamflow during model calibration, prediction and evaluation. The results showed that including uncertainty in the precipitation and temperature input has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency of the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Reduced information in the precipitation input resulted in a shift in the water balance parameter Pcorr and in a model producing smoother streamflow predictions, giving poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using wrong rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions obtained using a wrong rating curve, the evaluation scores vary depending on the true rating curve. Generally, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving low variance in the streamflow observations. Reduced information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores. This case study shows that estimating the water balance is challenging, since both precipitation inputs and streamflow observations have a pronounced systematic component in their uncertainties.

  9. Uncertainty and sensitivity analysis of control strategies using the benchmark simulation model No1 (BSM1).

    PubMed

    Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2009-01-01

    The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standard Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty increasing both their economical cost and variability as a trade-off. Finally, the maximum specific autotrophic growth rate (micro(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification related parameters, e.g. eta(g) (anoxic growth rate correction factor) and eta(h) (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.

  10. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  11. Probabilistic accident consequence uncertainty analysis: Dispersion and deposition uncertainty assessment, appendices A and B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harper, F.T.; Young, M.L.; Miller, L.A.

    MACCS and COSYMA, two new probabilistic accident consequence codes whose development was completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. The resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.

  12. Sources of Uncertainty in the Prediction of LAI / fPAR from MODIS

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Ganapol, Barry D.; Brass, James A. (Technical Monitor)

    2002-01-01

    To explicate the sources of uncertainty in the prediction of biophysical variables over space, consider a general equation of the form z = f(y; B), where z is a variable with values on some nominal, ordinal, interval or ratio scale; y is a vector of input variables; u is the spatial support of y and z; x and u are the spatial locations of y and z, respectively; f is a model; and B is the vector of the parameters of this model. Any y or z has a value and a spatial extent, which is called its support. Viewed in this way, the categories of uncertainty are variable (e.g. measurement), parameter, positional, support and model (e.g. structural) sources. The prediction of Leaf Area Index (LAI) and the fraction of absorbed photosynthetically active radiation (fPAR) are examples of z variables predicted using model(s) as a function of y variables and spatially constant parameters. The MOD15 algorithm is an example of f, called f1, with parameters including those defined by one of six biome types and solar and view angles. The Leaf Canopy Model (LCM2), a nested model that combines leaf radiative transfer with a full canopy reflectance model through the phase function, is a simpler though similar radiative transfer approach to f1. In a previous study, MOD15 and LCM2 gave similar results for the broadleaf forest biome. Differences between these two models can be used to consider the structural uncertainty in prediction results. In an effort to quantify each of the five sources of uncertainty and rank their relative importance for the LAI/fPAR prediction problem, we used recent data for an EOS Core Validation Site in the broadleaf biome with coincident surface reflectance, vegetation index, fPAR and LAI products from the Moderate Resolution Imaging Spectroradiometer (MODIS). Uncertainty due to support on the input reflectance variable was characterized using Landsat ETM+ data. Input uncertainties were propagated through the LCM2 model and compared with published uncertainties from the MOD15 algorithm.

  13. Distributed Time-Varying Formation Robust Tracking for General Linear Multiagent Systems With Parameter Uncertainties and External Disturbances.

    PubMed

    Hua, Yongzhao; Dong, Xiwang; Li, Qingdong; Ren, Zhang

    2017-05-18

    This paper investigates the time-varying formation robust tracking problems for high-order linear multiagent systems with a leader of unknown control input in the presence of heterogeneous parameter uncertainties and external disturbances. The followers need to accomplish an expected time-varying formation in the state space and track the state trajectory produced by the leader simultaneously. First, a time-varying formation robust tracking protocol with a totally distributed form is proposed utilizing the neighborhood state information. With the adaptive updating mechanism, neither any global knowledge about the communication topology nor the upper bounds of the parameter uncertainties, external disturbances and leader's unknown input are required in the proposed protocol. Then, in order to determine the control parameters, an algorithm with four steps is presented, where feasible conditions for the followers to accomplish the expected time-varying formation tracking are provided. Furthermore, based on the Lyapunov-like analysis theory, it is proved that the formation tracking error can converge to zero asymptotically. Finally, the effectiveness of the theoretical results is verified by simulation examples.

  14. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
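
    The sketch below illustrates the first-order moment-matching idea on a stand-in output function (not the quasi 1-D Euler code): input means and standard deviations are propagated through sensitivity derivatives and checked against Monte Carlo. All numerical values and the output function are illustrative assumptions.

        import numpy as np

        def output(a, b):                    # stand-in for a CFD output quantity
            return a**2 * np.exp(0.3 * b)

        mu = np.array([1.2, 0.5])            # input means (assumed)
        sig = np.array([0.05, 0.10])         # input standard deviations (assumed)

        # First-order sensitivity derivatives at the mean (central finite differences
        # here; the paper obtains them from the code's sensitivity analysis).
        h = 1e-6
        dfda = (output(mu[0] + h, mu[1]) - output(mu[0] - h, mu[1])) / (2 * h)
        dfdb = (output(mu[0], mu[1] + h) - output(mu[0], mu[1] - h)) / (2 * h)

        mean_fo = output(mu[0], mu[1])                       # first-order mean estimate
        std_fo = np.sqrt((dfda * sig[0])**2 + (dfdb * sig[1])**2)

        # Monte Carlo check of the approximation
        rng = np.random.default_rng(0)
        a = rng.normal(mu[0], sig[0], 100_000)
        b = rng.normal(mu[1], sig[1], 100_000)
        f = output(a, b)
        print(f"moment method: mean={mean_fo:.4f}, std={std_fo:.4f}")
        print(f"Monte Carlo  : mean={f.mean():.4f}, std={f.std():.4f}")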

  15. Probabilistic migration modelling focused on functional barrier efficiency and low migration concepts in support of risk assessment.

    PubMed

    Brandsch, Rainer

    2017-10-01

    Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and for this reason are estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts first, which turned out to be of limited applicability due to highly overestimated migration results. Probabilistic migration modelling gives the possibility to consider uncertainty of the mass-transfer parameters as well as other model inputs. With respect to a functional barrier, the most important parameters among others are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and is capable of applying Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (i.e., diffusion coefficient and layer thickness), predicts migration results with related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic migration modelling and related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts is possible, and associated migration risks and potential safety concerns can be identified in the early stage of packaging development. Furthermore, dedicated selection of materials exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
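
    A minimal sketch of the Monte Carlo step described above is given below. It uses a simple short-contact-time migration estimate rather than the software tool mentioned in the abstract, and all distributions, material values and contact conditions are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 20_000

        # Assumed input distributions for the relevant mass-transfer parameters
        D = rng.lognormal(mean=np.log(1e-13), sigma=1.0, size=n)   # diffusion coeff. [cm^2/s]
        L = rng.normal(loc=50e-4, scale=5e-4, size=n)              # layer thickness [cm]
        c0 = 500.0               # initial migrant concentration in the layer [mg/kg], assumed
        rho = 1.0                # polymer density [g/cm^3], assumed
        t = 10 * 24 * 3600.0     # contact time: 10 days [s]

        # Short-contact-time approximation for migration per unit area [mg/cm^2],
        # capped at the total amount available in the layer.
        m_area = np.minimum(2.0 * c0 * 1e-3 * rho * np.sqrt(D * t / np.pi),
                            c0 * 1e-3 * rho * L)

        for q in (50, 95, 99):
            print(f"{q}th percentile migration: {np.percentile(m_area, q):.3e} mg/cm^2")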

  16. Quantifying parametric uncertainty in the Rothermel model

    Treesearch

    S. Goodrick

    2008-01-01

    The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model (implemented in software widely used for fire spread modeling in the United States). This model consists of a non-linear system of equations that relates environmental variables (input parameter groups...

  17. Calibration under uncertainty for finite element models of masonry monuments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Atamturktur, Sezer,; Hemez, Francois,; Unal, Cetin

    2010-02-01

    Historical unreinforced masonry buildings often include features such as load-bearing unreinforced masonry vaults and their supporting framework of piers, fill, buttresses, and walls. The masonry vaults of such buildings are among the most vulnerable structural components and certainly among the most challenging to analyze. The versatility of finite element (FE) analyses in incorporating various constitutive laws, as well as practically all geometric configurations, has resulted in the widespread use of the FE method for the analysis of complex unreinforced masonry structures over the last three decades. However, an FE model is only as accurate as its input parameters, and there are two fundamental challenges in defining FE model input parameters: (1) material properties and (2) support conditions. The difficulties in defining these two aspects of the FE model arise from the lack of knowledge in the common engineering understanding of masonry behavior. As a result, engineers are unable to define these FE model input parameters with certainty, and, inevitably, uncertainties are introduced to the FE model.

  18. Certainty Equivalence M-MRAC for Systems with Unmatched Uncertainties

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    The paper presents a certainty equivalence state feedback indirect adaptive control design method for the systems of any relative degree with unmatched uncertainties. The approach is based on the parameter identification (estimation) model, which is completely separated from the control design and is capable of producing parameter estimates as fast as the computing power allows without generating high frequency oscillations. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters.

  19. Impacts of uncertainties in weather and streamflow observations in calibration and evaluation of an elevation distributed HBV-model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.

    2012-04-01

    The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation-distributed HBV model, operating on daily time steps, to a small high-elevation catchment in southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and eventual elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for, whereas the sampling uncertainty related to network density was neglected. For every day a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow; thus the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM-based MCMC routine. The effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.

  20. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    DOE PAGES

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    2017-11-15

    In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.
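
    The following is a minimal sketch of the Monte Carlo dropout mechanism referred to above, using a tiny placeholder network in plain NumPy rather than the authors' trained lensing network; the weights, dropout rate, input and the number of output parameters are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        W1, b1 = rng.normal(size=(64, 10)), np.zeros(64)        # placeholder "trained" weights
        W2, b2 = rng.normal(size=(3, 64)) * 0.1, np.zeros(3)    # 3 stand-in lens parameters
        p_drop = 0.1                                            # dropout rate (the tuned hyperparameter)

        def forward(x, rng):
            h = np.maximum(W1 @ x + b1, 0.0)                    # hidden layer, ReLU
            mask = rng.random(h.shape) > p_drop                 # dropout kept active at prediction time
            h = h * mask / (1.0 - p_drop)
            return W2 @ h + b2

        x = rng.normal(size=10)                                 # stand-in for an input image's features
        draws = np.array([forward(x, rng) for _ in range(1000)])

        mean, std = draws.mean(axis=0), draws.std(axis=0)
        for i, (m, s) in enumerate(zip(mean, std)):
            print(f"parameter {i}: {m:+.3f} +/- {s:.3f}")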

  1. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    In Hezaveh et al. (2017) we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single hyperparameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that neural networks can be a fast alternative to Monte Carlo Markov Chains for parameter uncertainty estimation in many practical applications, allowing more than seven orders of magnitude improvement in speed.

  2. Quantifying uncertainty in NDSHA estimates due to earthquake catalogue

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano

    2014-05-01

    The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large-magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values of each model; instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error can therefore be a factor of 2, intrinsic in the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.

  3. Multi-criteria evaluation of wastewater treatment plant control strategies under uncertainty.

    PubMed

    Flores-Alsina, Xavier; Rodríguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V

    2008-11-01

    The evaluation of activated sludge control strategies in wastewater treatment plants (WWTP) via mathematical modelling is a complex activity because several objectives (e.g. economic, environmental, technical and legal) must be taken into account at the same time, i.e. the evaluation of the alternatives is a multi-criteria problem. Activated sludge models are not well characterized and some of the parameters can present uncertainty, e.g. the influent fractions arriving at the facility and the effect of either temperature or toxic compounds on the kinetic parameters, which have a strong influence on the model predictions used during the evaluation of the alternatives and affect the resulting rank of preferences. Using a simplified version of the IWA Benchmark Simulation Model No. 2 as a case study, this article shows the variations in the decision making when the uncertainty in activated sludge model (ASM) parameters is either included or not during the evaluation of WWTP control strategies. This paper comprises two main sections. Firstly, there is the evaluation of six WWTP control strategies using multi-criteria decision analysis, setting the ASM parameters at their default values. In the following section, the uncertainty is introduced, i.e. input uncertainty, which is characterized by probability distribution functions based on the available process knowledge. Next, Monte Carlo simulations are run to propagate input uncertainty through the model and affect the different outcomes. Thus (i) the variation in the overall degree of satisfaction of the control objectives for the generated WWTP control strategies is quantified, (ii) the contributions of environmental, legal, technical and economic objectives to the existing variance are identified and finally (iii) the influence of the relative importance of the control objectives during the selection of alternatives is analyzed. The results show that the control strategies with an external carbon source reduce the output uncertainty in the criteria used to quantify the degree of satisfaction of environmental, technical and legal objectives, but increase the economic costs and their variability as a trade-off. Also, it is shown how a preliminarily selected alternative with a cascade ammonium controller becomes less desirable when input uncertainty is included, while simpler alternatives have more chance of success.

  4. Bayesian analysis of input uncertainty in hydrological modeling: 2. Application

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.

    2006-03-01

    The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) the French Broad River and (2) the Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.

  5. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    Successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files — GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL — and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against the observations in order to obtain a good modeling performance. Having a parameter calibration tool specifically for automated calibration and uncertainty estimation of the WRF-Hydro model can provide significant convenience for the modeling community. In this study, we developed a customized tool using the parallel version of the model-independent parameter estimation and uncertainty analysis tool, PEST, and enabled it to run on HPC with the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates that are specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivity and uncertainties are analyzed using the customized PEST tool we developed.

  6. Modeling transport phenomena and uncertainty quantification in solidification processes

    NASA Astrophysics Data System (ADS)

    Fezi, Kyle S.

    Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold, followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied, and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed-outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and in those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been analyzed in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.

  7. Evaluating uncertainty in environmental life-cycle assessment. A case study comparing two insulation options for a Dutch one-family dwelling.

    PubMed

    Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas

    2003-06-01

    The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.

  8. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-06-01

    Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables (PR, ET0, Kc, and crop calendar) were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
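
    The one-at-a-time method mentioned above can be sketched as follows, using a stand-in water footprint relation rather than the grid-based daily water balance model; the base values, perturbation fraction and the relation itself are illustrative assumptions.

        import numpy as np

        base = {"PR": 500.0, "ET0": 900.0, "Kc": 1.05, "Ym": 6.0}   # assumed base values

        def water_footprint(p):
            # Stand-in relation: crop water use / yield (m^3 per tonne), not the real model.
            cwu = 10.0 * p["Kc"] * p["ET0"]                    # crop water use [m^3/ha]
            yield_t = p["Ym"] * min(1.0, p["PR"] / 450.0)      # simple rainfall-limited yield [t/ha]
            return cwu / yield_t

        base_wf = water_footprint(base)
        # Perturb each input by +/-10% while holding the others at their base values.
        for name in base:
            for frac in (-0.1, 0.1):
                perturbed = dict(base, **{name: base[name] * (1.0 + frac)})
                change = (water_footprint(perturbed) - base_wf) / base_wf
                print(f"{name:>3s} {frac:+.0%} -> WF change {change:+.1%}")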

  9. Sediment residence times constrained by uranium-series isotopes: A critical appraisal of the comminution approach

    NASA Astrophysics Data System (ADS)

    Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim

    2013-02-01

    Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages and hence residence times of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine individual and combined comminution age uncertainties associated with each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil-lost 234Th and the initial (234U/238U) ratio of the source material. In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.

  10. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty.

    PubMed

    Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E

    2015-09-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.

  11. Efficient Optimization of Stimuli for Model-Based Design of Experiments to Resolve Dynamical Uncertainty

    PubMed Central

    Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.

    2015-01-01

    This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
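
    A minimal sketch of the sampling-based sensitivity measures named above (Pearson and Spearman coefficients between sampled inputs and a figure of merit) is given below; the stand-in model, input distributions and coefficients are illustrative assumptions, not VERA-CS results.

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        rng = np.random.default_rng(7)
        n = 500

        # Two illustrative uncertain inputs: coolant inlet temperature [K] and power [%]
        t_inlet = rng.normal(565.0, 2.0, n)
        power = rng.normal(100.0, 1.0, n)

        # Stand-in figure of merit (a DNBR-like quantity decreasing with both inputs)
        fom = 2.5 - 0.015 * (t_inlet - 565.0) - 0.02 * (power - 100.0) + rng.normal(0, 0.01, n)

        for name, xs in [("inlet T", t_inlet), ("power", power)]:
            r, _ = pearsonr(xs, fom)
            rho, _ = spearmanr(xs, fom)
            print(f"{name:8s}: Pearson {r:+.2f}, Spearman {rho:+.2f}")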

  13. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  14. Global sensitivity and uncertainty analysis of the nitrate leaching and crop yield simulation under different water and nitrogen management practices

    USDA-ARS?s Scientific Manuscript database

    Agricultural system models have become important tools in studying water and nitrogen (N) dynamics, as well as crop growth, under different management practices. Complexity in input parameters often leads to significant uncertainty when simulating dynamic processes such as nitrate leaching or crop y...

  15. Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet

    NASA Astrophysics Data System (ADS)

    Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.

    2015-12-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
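
    A minimal sketch of the active-subspace construction is shown below: gradients of a stand-in quantity of interest are sampled, the matrix C = E[∇f ∇fᵀ] is estimated, and its dominant eigenvector defines the single derived active variable. The test function, its dimensionality and the sampling ranges are illustrative assumptions, not the scramjet simulation.

        import numpy as np

        rng = np.random.default_rng(3)
        m = 7                                   # number of physical input parameters

        def f(x):                               # stand-in quantity of interest
            w = np.array([2.0, 1.5, 0.5, 0.1, 0.05, 0.02, 0.01])
            return np.exp(w @ x)

        def grad_f(x, h=1e-6):                  # finite-difference gradient
            return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(m)])

        X = rng.uniform(-1, 1, size=(200, m))   # normalized operating conditions
        G = np.array([grad_f(x) for x in X])

        C = G.T @ G / len(G)                    # estimate of E[grad f grad f^T]
        eigval, eigvec = np.linalg.eigh(C)
        w1 = eigvec[:, -1]                      # dominant eigenvector = active direction

        print("eigenvalues (largest last):", np.round(eigval, 4))
        active_var = X @ w1                     # one derived active variable per sample
        print("active-variable range:", active_var.min(), "to", active_var.max())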

  16. Optimal test selection for prediction uncertainty reduction

    DOE PAGES

    Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel

    2016-12-02

    Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.

  17. Active subspace uncertainty quantification for a polydomain ferroelectric phase-field model

    NASA Astrophysics Data System (ADS)

    Leon, Lider S.; Smith, Ralph C.; Miles, Paul; Oates, William S.

    2018-03-01

    Quantum-informed ferroelectric phase-field models capable of predicting material behavior are necessary for facilitating the development and production of many adaptive structures and intelligent systems. Uncertainty is present in these models, given the quantum scale at which calculations take place. A necessary analysis is to determine how the uncertainty in the response can be attributed to the uncertainty in the model inputs or parameters. A second analysis is to identify active subspaces within the original parameter space, which quantify directions in which the model response varies most dominantly, thus reducing sampling effort and computational cost. In this investigation, we identify an active subspace for a poly-domain ferroelectric phase-field model. Using the active variables as our independent variables, we then construct a surrogate model and perform Bayesian inference. Once we quantify the uncertainties in the active variables, we obtain uncertainties for the original parameters via an inverse mapping. The analysis provides insight into how active subspace methodologies can be used to reduce the computational power needed to perform Bayesian inference on model parameters informed by experimental or simulated data.

  18. Uncertainties in Parameters Estimated with Neural Networks: Application to Strong Gravitational Lensing

    NASA Astrophysics Data System (ADS)

    Perreault Levasseur, Laurence; Hezaveh, Yashar D.; Wechsler, Risa H.

    2017-11-01

    In Hezaveh et al. we showed that deep learning can be used for model parameter estimation and trained convolutional neural networks to determine the parameters of strong gravitational-lensing systems. Here we demonstrate a method for obtaining the uncertainties of these parameters. We review the framework of variational inference to obtain approximate posteriors of Bayesian neural networks and apply it to a network trained to estimate the parameters of the Singular Isothermal Ellipsoid plus external shear and total flux magnification. We show that the method can capture the uncertainties due to different levels of noise in the input data, as well as training and architecture-related errors made by the network. To evaluate the accuracy of the resulting uncertainties, we calculate the coverage probabilities of marginalized distributions for each lensing parameter. By tuning a single variational parameter, the dropout rate, we obtain coverage probabilities approximately equal to the confidence levels for which they were calculated, resulting in accurate and precise uncertainty estimates. Our results suggest that the application of approximate Bayesian neural networks to astrophysical modeling problems can be a fast alternative to Monte Carlo Markov Chains, allowing orders of magnitude improvement in speed.

  19. PEBBED Uncertainty and Sensitivity Analysis of the CRP-5 PBMR DLOFC Transient Benchmark with the SUSA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2011-01-01

    The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2σ) uncertainty on the maximum fuel temperature was between ±58°C (3.6%) and ±76°C (4.7%) on a mean value of 1604 °C. These values mostly depended on the selection of the distribution types, and not on the number of model calculations above the required Wilks criteria (a (95%,95%) statement would usually require 93 model runs).
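
    The Wilks criterion quoted in the last sentence can be reproduced with the standard order-statistic formulas, as in the short sketch below: the smallest number of code runs n such that the sample extremes bound the desired population fraction gamma with confidence beta.

        def wilks_one_sided(gamma=0.95, beta=0.95):
            # smallest n with 1 - gamma**n >= beta (one-sided tolerance limit)
            n = 1
            while 1.0 - gamma**n < beta:
                n += 1
            return n

        def wilks_two_sided(gamma=0.95, beta=0.95):
            # smallest n with 1 - gamma**n - n*(1-gamma)*gamma**(n-1) >= beta (two-sided)
            n = 2
            while 1.0 - gamma**n - n * (1.0 - gamma) * gamma**(n - 1) < beta:
                n += 1
            return n

        print("one-sided 95/95:", wilks_one_sided())   # 59 runs
        print("two-sided 95/95:", wilks_two_sided())   # 93 runs, as quoted in the abstract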

  20. Uncertainty Modeling of Pollutant Transport in Atmosphere and Aquatic Route Using Soft Computing

    NASA Astrophysics Data System (ADS)

    Datta, D.

    2010-10-01

    Hazardous radionuclides are released as pollutants into the atmospheric and aquatic environment (ATAQE) during the normal operation of nuclear power plants. Atmospheric and aquatic dispersion models are routinely used to assess the impact of the release of radionuclides from any nuclear facility, or of hazardous chemicals from any chemical plant, on the ATAQE. The effect of exposure to the hazardous nuclides or chemicals is measured in terms of risk. Uncertainty modeling is an integral part of risk assessment. The paper focuses on the uncertainty modeling of pollutant transport in the atmospheric and aquatic environment using soft computing. Soft computing is addressed due to the lack of information on the parameters that represent the corresponding models. Soft computing in this domain basically addresses the usage of fuzzy set theory to explore the uncertainty of the model parameters, and such uncertainty is called epistemic uncertainty. Each uncertain input parameter of the model is described by a triangular membership function.
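
    A minimal sketch of propagating triangular fuzzy parameters through a model by alpha-cuts is given below; the stand-in dispersion relation and the membership functions for wind speed and the dispersion coefficient are illustrative assumptions, not the codes used in the paper.

        import numpy as np

        def alpha_cut(tri, alpha):
            """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
            low, mode, high = tri
            return (low + alpha * (mode - low), high - alpha * (high - mode))

        # Illustrative fuzzy inputs: wind speed [m/s] and dispersion length scale [m]
        wind = (2.0, 5.0, 9.0)
        sigma = (30.0, 60.0, 120.0)
        Q = 1.0e6   # release rate, assumed crisp

        def concentration(u, s):
            return Q / (2.0 * np.pi * u * s**2)   # simple stand-in dispersion relation

        for alpha in (0.0, 0.5, 1.0):
            u_lo, u_hi = alpha_cut(wind, alpha)
            s_lo, s_hi = alpha_cut(sigma, alpha)
            # The stand-in model decreases monotonically in both inputs, so the
            # output interval comes from opposite corners of the input box.
            c_hi = concentration(u_lo, s_lo)
            c_lo = concentration(u_hi, s_hi)
            print(f"alpha={alpha:.1f}: concentration in [{c_lo:.3e}, {c_hi:.3e}]")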

  1. Development of an automated procedure for estimation of the spatial variation of runoff in large river basins

    USDA-ARS?s Scientific Manuscript database

    The use of distributed parameter models to address water resource management problems has increased in recent years. Calibration is necessary to reduce the uncertainties associated with model input parameters. Manual calibration of a distributed parameter model is a very time consuming effort. There...

  2. A Methodology for Robust Comparative Life Cycle Assessments Incorporating Uncertainty.

    PubMed

    Gregory, Jeremy R; Noshadravan, Arash; Olivetti, Elsa A; Kirchain, Randolph E

    2016-06-21

    We propose a methodology for conducting robust comparative life cycle assessments (LCA) by leveraging uncertainty. The method evaluates a broad range of the possible scenario space in a probabilistic fashion while simultaneously considering uncertainty in input data. The method is intended to ascertain which scenarios have a definitive environmentally preferable choice among the alternatives being compared and the significance of the differences given uncertainty in the parameters, which parameters have the most influence on this difference, and how we can identify the resolvable scenarios (where one alternative in the comparison has a clearly lower environmental impact). This is accomplished via an aggregated probabilistic scenario-aware analysis, followed by an assessment of which scenarios have resolvable alternatives. Decision-tree partitioning algorithms are used to isolate meaningful scenario groups. In instances where the alternatives cannot be resolved for scenarios of interest, influential parameters are identified using sensitivity analysis. If those parameters can be refined, the process can be iterated using the refined parameters. We also present definitions of uncertainty quantities that have not been applied in the field of LCA and approaches for characterizing uncertainty in those quantities. We then demonstrate the methodology through a case study of pavements.

  3. Probabilistic biosphere modeling for the long-term safety assessment of geological disposal facilities for radioactive waste using first- and second-order Monte Carlo simulation.

    PubMed

    Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald

    2018-10-01

    In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the probability distribution function shape (empirical distribution functions and fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter, as well as the role of interacting parameters, are studied. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter here is described using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of epistemic uncertainty of a radioecological parameter on the output result is much larger than that caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results as well as only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
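
    The first- versus second-order (nested) Monte Carlo distinction can be sketched as follows: an outer loop samples the uncertain moments of a parameter (epistemic uncertainty) and an inner loop samples values from the distribution those moments define (aleatory uncertainty). The transfer factor, intake and dose coefficient values below are illustrative assumptions, not the study's data.

        import numpy as np

        rng = np.random.default_rng(11)
        n_outer, n_inner = 200, 1000
        intake = 500.0          # annual intake [kg], assumed fixed
        dose_coeff = 1.3e-8     # dose coefficient [Sv/Bq], assumed fixed

        p95_per_outer = []
        for _ in range(n_outer):
            # Epistemic draw: uncertain geometric mean and geometric std of a transfer factor
            gm = rng.lognormal(np.log(0.05), 0.3)
            gsd = rng.uniform(1.5, 3.0)
            # Aleatory draws: variability of the transfer factor given those moments
            tf = rng.lognormal(np.log(gm), np.log(gsd), n_inner)
            bdcf = tf * intake * dose_coeff        # stand-in biosphere dose conversion factor
            p95_per_outer.append(np.percentile(bdcf, 95))

        p95 = np.array(p95_per_outer)
        print("spread of the aleatory 95th percentile due to epistemic uncertainty:")
        print(f"  5th-95th percentile: {np.percentile(p95, 5):.2e} .. {np.percentile(p95, 95):.2e}")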

  4. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model given a specific case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, the HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied for estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentration in the Poldasht and Bazargan plains has caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimation using the HBMA method shows better agreement with the observation data in the test step because the estimates are not based on a single model with non-dominant weight.
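
    A minimal sketch of the model-averaging step underlying (hierarchical) Bayesian model averaging is given below: model predictions are combined with data-based weights, and the predictive variance splits into within-model and between-model parts. The per-model means, variances and weights are illustrative assumptions, not the study's ANN results.

        import numpy as np

        # Per-model predictive mean and variance at one location (e.g., fluoride conc.)
        means = np.array([0.92, 1.10, 1.05, 0.85])        # mg/L, assumed
        variances = np.array([0.010, 0.020, 0.015, 0.030])
        weights = np.array([0.40, 0.30, 0.20, 0.10])      # model weights from the evidence of data
        weights = weights / weights.sum()

        bma_mean = np.sum(weights * means)
        within = np.sum(weights * variances)                  # average within-model variance
        between = np.sum(weights * (means - bma_mean) ** 2)   # spread between model means
        total_var = within + between

        print(f"BMA mean = {bma_mean:.3f}, within-model var = {within:.4f}, "
              f"between-model var = {between:.4f}, total std = {np.sqrt(total_var):.3f}")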

  5. C-SWAT: The Soil and Water Assessment Tool with consolidated input files in alleviating computational burden of recursive simulations

    USDA-ARS?s Scientific Manuscript database

    The temptation to include model parameters and high resolution input data together with the availability of powerful optimization and uncertainty analysis algorithms has significantly enhanced the complexity of hydrologic and water quality modeling. However, the ability to take advantage of sophist...

  6. Uncertainty Determination for Aeroheating in Uranus and Saturn Probe Entries by the Monte Carlo Method

    NASA Technical Reports Server (NTRS)

    Palmer, Grant; Prabhu, Dinesh; Cruden, Brett A.

    2013-01-01

    The 2013-2022 Decadal survey for planetary exploration has identified probe missions to Uranus and Saturn as high priorities. This work endeavors to examine the uncertainty in determining aeroheating in such entry environments. Representative entry trajectories are constructed using the TRAJ software. Flowfields at selected points on the trajectories are then computed using the Data Parallel Line Relaxation (DPLR) Computational Fluid Dynamics Code. A Monte Carlo study is performed on the DPLR input parameters to determine the uncertainty in the predicted aeroheating, and correlation coefficients are examined to identify which input parameters show the most influence on the uncertainty. A review of the present best practices for input parameters (e.g. transport coefficient and vibrational relaxation time) is also conducted. It is found that the 2σ uncertainty for heating on Uranus entry is no more than 2.1%, assuming an equilibrium catalytic wall, with the uncertainty being determined primarily by diffusion and the H2 recombination rate within the boundary layer. However, if the wall is assumed to be partially or non-catalytic, this uncertainty may increase to as large as 18%. The catalytic wall model can contribute over a 3x change in heat flux and a 20% variation in film coefficient. Therefore, coupled material response/fluid dynamic models are recommended for this problem. It was also found that much of this variability is artificially suppressed when a constant Schmidt number approach is implemented. Because the boundary layer is reacting, it is necessary to employ self-consistent effective binary diffusion to obtain a correct thermal transport solution. For Saturn entries, the 2σ uncertainty for convective heating was less than 3.7%. The major uncertainty driver was dependent on shock temperature/velocity, changing from boundary layer thermal conductivity to diffusivity and then to shock layer ionization rate as velocity increases. While radiative heating for Uranus entry was negligible, the nominal solution for Saturn computed up to 20% radiative heating at the highest velocity examined. The radiative heating followed a non-normal distribution, with up to a 3x variation in magnitude. This uncertainty is driven by the H2 dissociation rate, as H2 that persists in the hot non-equilibrium zone contributes significantly to radiation.

  7. Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III

    2004-01-01

    A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through the use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both the input standard deviations and the target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
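    The first-order moment propagation can be sketched as follows (a toy output function and finite-difference derivatives are assumptions; the paper uses CFD sensitivity derivatives):

```python
# Minimal sketch of an approximate first-order moment method, assuming independent
# normal inputs: the output mean is the function at the input means, and the output
# variance sums the squared sensitivity derivatives times the input variances.
import numpy as np

def output(x):                       # stand-in for a CFD output (e.g. a force coefficient)
    return x[0] ** 2 + 0.5 * x[0] * x[1] + np.sin(x[2])

mu = np.array([1.0, 0.3, 0.2])       # input mean values
sigma = np.array([0.02, 0.01, 0.05]) # input standard deviations

grad = np.zeros_like(mu)
h = 1e-6
for i in range(mu.size):
    dx = np.zeros_like(mu); dx[i] = h
    grad[i] = (output(mu + dx) - output(mu - dx)) / (2 * h)  # central differences

mean_f = output(mu)                      # first-order mean estimate
var_f = np.sum((grad * sigma) ** 2)      # first-order variance estimate
print(mean_f, np.sqrt(var_f))
```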

  8. Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study

    NASA Astrophysics Data System (ADS)

    Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2013-04-01

    The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project "Towards a Good Ecological Status in the river Zenne (GESZ)" was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties on the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and on the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, we first identified independent rainfall periods, based on the daily precipitation and stream flow observations, using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For the parameter uncertainty assessment, given the high number of parameters of the SWAT model, we first screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique. Subsequently, we considered only the most sensitive parameters for parameter optimization and UA. To explicitly account for the stream flow uncertainty, we assumed that the stream flow measurement error increases linearly with the stream flow value. To assess the uncertainty and infer posterior distributions of the parameters, we used a Markov Chain Monte Carlo (MCMC) sampler - DiffeRential Evolution Adaptive Metropolis (DREAM) - that uses sampling from an archive of past states to generate candidate points in each individual chain. It is shown that the marginal posterior distributions of the rainfall multipliers vary widely between individual events, as a consequence of rainfall measurement errors and the spatial variability of the rain. Only a few of the rainfall events are well defined. The marginal posterior distributions of the SWAT model parameter values are well defined and identified by DREAM, within their prior ranges. The posterior distributions of the output uncertainty parameter values also show that the stream flow data are highly uncertain. The approach of using rainfall multipliers to treat rainfall uncertainty for a complex model has an impact on the model parameter marginal posterior distributions and on the model results.
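    A conceptual sketch of the rainfall-multiplier idea, not the SWAT/DREAM setup itself: a toy runoff model with one parameter and one multiplier per event, sampled jointly with a plain Metropolis algorithm (all names and numbers are illustrative):

```python
# Toy demonstration of rainfall multipliers as latent parameters: the model parameter k
# and one multiplier per rainfall event are sampled jointly from the posterior with a
# simple Metropolis sampler (DREAM in the paper; Metropolis here for brevity).
import numpy as np

rng = np.random.default_rng(2)
events = [np.array([5.0, 12.0, 3.0]), np.array([20.0, 8.0])]   # observed rain per event (mm)

def simulate(k, mults):
    # runoff = k * corrected rainfall, concatenated over events (deliberately simple)
    return np.concatenate([k * m * r for m, r in zip(mults, events)])

obs_flow = simulate(0.6, [1.2, 0.8]) + rng.normal(0, 0.3, 5)    # synthetic "observations"

def log_post(theta):
    k, mults = theta[0], theta[1:]
    if k <= 0 or np.any(mults <= 0):
        return -np.inf
    resid = obs_flow - simulate(k, mults)
    # Gaussian likelihood plus a prior keeping multipliers near 1
    return -0.5 * np.sum((resid / 0.3) ** 2) - 0.5 * np.sum(((mults - 1.0) / 0.5) ** 2)

theta, lp, chain = np.array([0.5, 1.0, 1.0]), -np.inf, []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.05, theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print(np.mean(chain[1000:], axis=0))   # posterior means of k and the two multipliers
```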

  9. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits and, using the actual arrival time, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found for all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.

  10. Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishma

    2010-01-01

    In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.

  11. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
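    A minimal sketch of elementary-effect screening with a toy model in place of HiLAT (the model form is an assumption): mu* ranks parameter importance and sigma flags interactions or nonlinearity:

```python
# Elementary-effect (Morris) screening sketch: each trajectory perturbs one parameter
# at a time by a fixed step; the mean absolute effect (mu*) ranks importance and the
# standard deviation of effects (sigma) indicates nonlinearity or interactions.
import numpy as np

def model(x):                                  # toy stand-in for a climate output of interest
    return 3 * x[0] + x[1] ** 2 + 0.5 * x[2] * x[3] + 0.1 * x[4]

rng = np.random.default_rng(3)
k, r, delta = 5, 20, 0.25                      # 5 parameters, 20 trajectories, step size
effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, k)           # random starting point in the unit cube
    base = model(x)
    for i in rng.permutation(k):               # move one factor at a time, random order
        x_new = x.copy()
        x_new[i] += delta
        f_new = model(x_new)
        effects[t, i] = (f_new - base) / delta
        x, base = x_new, f_new

mu_star = np.abs(effects).mean(axis=0)
sigma = effects.std(axis=0)
print(np.argsort(mu_star)[::-1])               # parameters ranked by importance
```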

  12. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating the unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
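    A conceptual sketch (not the SEnKF code itself) of joint state-parameter estimation with an augmented ensemble and kernel smoothing of the parameter ensemble, for a one-pool toy carbon model with a single uncertain respiration parameter:

```python
# Joint state-parameter ensemble Kalman filter sketch with kernel smoothing: the state
# C and parameter k are concatenated into an augmented vector; k is shrunk toward the
# ensemble mean and re-inflated each step (kernel smoothing) before the analysis update.
import numpy as np

rng = np.random.default_rng(4)
n_ens, n_steps, dt, obs_err = 100, 60, 1.0, 0.2
a = 0.98                                           # kernel-smoothing shrinkage factor

C = rng.normal(10.0, 1.0, n_ens)                   # state ensemble (carbon pool)
k = rng.normal(0.10, 0.03, n_ens)                  # parameter ensemble (respiration rate)
true_C, true_k = 10.0, 0.15

for t in range(n_steps):
    u = 1.0 + 0.5 * np.sin(0.2 * t)                # known driver (e.g. carbon input)
    # forecast step
    C = C + dt * (u - k * C)
    true_C = true_C + dt * (u - true_k * true_C)
    # kernel smoothing: shrink parameters toward the mean, re-inflate the variance
    k = a * k + (1 - a) * k.mean() + np.sqrt(1 - a**2) * k.std() * rng.standard_normal(n_ens)
    # analysis step on the augmented state [C, k] using a synthetic observation of C
    y = true_C + rng.normal(0, obs_err)
    X = np.vstack([C, k])                          # 2 x n_ens augmented ensemble
    A = X - X.mean(axis=1, keepdims=True)
    P = A @ A.T / (n_ens - 1)
    H = np.array([[1.0, 0.0]])                     # only C is observed
    K = P @ H.T / (H @ P @ H.T + obs_err**2)       # Kalman gain
    X = X + K @ (y + rng.normal(0, obs_err, n_ens) - H @ X)
    C, k = X[0], X[1]

print(f"estimated k = {k.mean():.3f} +/- {k.std():.3f} (true {true_k})")
```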

  13. Modeling uncertainty: quicksand for water temperature modeling

    USGS Publications Warehouse

    Bartholow, John M.

    2003-01-01

    Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper will address important components of uncertainty in modeling water temperatures, and discuss several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.

  14. The impact of personalized probabilistic wall thickness models on peak wall stress in abdominal aortic aneurysms.

    PubMed

    Biehler, J; Wall, W A

    2018-02-01

    If computational models are ever to be used in high-stakes decision making in clinical practice, the use of personalized models and predictive simulation techniques is a must. This entails rigorous quantification of uncertainties as well as harnessing available patient-specific data to the greatest extent possible. Although researchers are beginning to realize that taking uncertainty in model input parameters into account is a necessity, the predominantly used probabilistic description for these uncertain parameters is based on elementary random variable models. In this work, we set out for a comparison of different probabilistic models for uncertain input parameters using the example of an uncertain wall thickness in finite element models of abdominal aortic aneurysms. We provide the first comparison between a random variable and a random field model for the aortic wall and investigate the impact on the probability distribution of the computed peak wall stress. Moreover, we show that the uncertainty about the prevailing peak wall stress can be reduced if noninvasively available, patient-specific data are harnessed for the construction of the probabilistic wall thickness model. Copyright © 2017 John Wiley & Sons, Ltd.
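    An illustrative contrast between the two probabilistic descriptions, using a toy one-dimensional wall and a toy stress proxy rather than the finite element models of the paper:

```python
# Random variable vs. random field wall-thickness models (toy setup, not the authors'
# FE workflow): the random variable assigns one uncertain thickness to the whole wall,
# while the random field draws spatially correlated thickness values along the wall.
import numpy as np

rng = np.random.default_rng(13)
n_points, n_samples = 50, 2000
mu_t, sd_t, corr_len = 1.5, 0.3, 0.2            # mm, mm, normalized correlation length
x = np.linspace(0, 1, n_points)

# Random variable model: one thickness per realization, constant in space
t_rv = mu_t + sd_t * rng.standard_normal((n_samples, 1)) * np.ones((1, n_points))

# Random field model: correlated thickness profile (squared-exponential covariance)
cov = sd_t**2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / corr_len) ** 2)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_points))
t_rf = mu_t + rng.standard_normal((n_samples, n_points)) @ L.T

# Toy "peak wall stress" proxy: stress scales with 1/min(thickness) over the wall
for name, t in [("random variable", t_rv), ("random field", t_rf)]:
    stress = 1.0 / np.clip(t.min(axis=1), 0.3, None)
    print(f"{name:16s}: mean={stress.mean():.2f}, 95th pct={np.percentile(stress, 95):.2f}")
```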

  15. Effects of input uncertainty on cross-scale crop modeling

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy because this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input data, from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from the input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity of maize cropping systems.

  16. Impact of parametric uncertainty on estimation of the energy deposition into an irradiated brain tumor

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Tartakovsky, Daniel M.

    2017-11-01

    Predictions of the total energy deposited into a brain tumor through X-ray irradiation are notoriously error-prone. We investigate how this predictive uncertainty is affected by uncertainty in both the location of the region occupied by a dose-enhancing iodinated contrast agent and the agent's concentration. This is done within the probabilistic framework in which these uncertain parameters are modeled as random variables. We employ the stochastic collocation (SC) method to estimate statistical moments of the deposited energy in terms of statistical moments of the random inputs, and the global sensitivity analysis (GSA) to quantify the relative importance of uncertainty in these parameters on the overall predictive uncertainty. A nonlinear radiation-diffusion equation dramatically magnifies the coefficient of variation of the uncertain parameters, yielding a large coefficient of variation for the predicted energy deposition. This demonstrates that accurate prediction of the energy deposition requires a proper treatment of even small parametric uncertainty. Our analysis also reveals that SC outperforms standard Monte Carlo, but its relative efficiency decreases as the number of uncertain parameters increases from one to three. A robust GSA ameliorates this problem by reducing this number.
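    A minimal sketch of the stochastic collocation idea for a single Gaussian input, with a toy nonlinear function in place of the radiation-diffusion model; moments from a few Gauss-Hermite collocation points are compared against plain Monte Carlo:

```python
# Stochastic collocation sketch: statistical moments of a nonlinear output are computed
# from a handful of deterministic runs at Gauss-Hermite collocation points of a Gaussian
# input, and compared with a Monte Carlo reference.
import numpy as np

def deposited_energy(c):              # toy stand-in for the radiation-diffusion model
    return np.exp(1.5 * c) / (1.0 + c**2)

mu, sigma = 1.0, 0.2                  # uncertain concentration: N(mu, sigma^2)

# Stochastic collocation with 7 probabilists' Gauss-Hermite points
nodes, weights = np.polynomial.hermite_e.hermegauss(7)
weights = weights / np.sqrt(2 * np.pi)          # normalize to the standard normal measure
vals = deposited_energy(mu + sigma * nodes)     # 7 deterministic "model runs"
sc_mean = np.sum(weights * vals)
sc_var = np.sum(weights * vals**2) - sc_mean**2

# Reference Monte Carlo estimate
rng = np.random.default_rng(5)
mc = deposited_energy(rng.normal(mu, sigma, 200_000))
print(f"SC:  mean={sc_mean:.4f}  CoV={np.sqrt(sc_var)/sc_mean:.3f}")
print(f"MC:  mean={mc.mean():.4f}  CoV={mc.std()/mc.mean():.3f}")
```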

  17. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    NASA Astrophysics Data System (ADS)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of errors (e.g., errors in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative in order to improve the reliability of modeling results. The uncertainty analysis must address the difficulties in calibrating hydrological models, which are compounded in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of the parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor>0.83, R-factor <0.56 and R2>0.91, NSE>0.89, and 0.18

  18. Probabilistic Density Function Method for Stochastic ODEs of Power Systems with Uncertain Power Input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil

    Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) where random input parameters represent uncertainty in wind and solar energy. The existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the Probability Density Function (PDF) method for deriving a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically. Good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
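    A Monte Carlo reference sketch (not the PDF-method solver) for a toy swing-type SODE driven by a time-correlated Ornstein-Uhlenbeck power input; parameter values are illustrative only:

```python
# Monte Carlo reference for a toy power-system SODE with time-correlated input: an
# Ornstein-Uhlenbeck process drives the mechanical power in a damped single-machine
# swing-type equation; the ensemble gives the distribution of the frequency deviation.
import numpy as np

rng = np.random.default_rng(6)
n, steps, dt = 5000, 2000, 1e-3
tau, sigma_p = 0.5, 0.1                  # correlation time and std. dev. of power input
D, M = 1.0, 0.2                          # damping and inertia (toy values)

omega = np.zeros(n)                      # frequency-deviation ensemble
p = rng.normal(0.0, sigma_p, n)          # OU-distributed power fluctuation
rho = np.exp(-dt / tau)                  # exact one-step OU autocorrelation

for _ in range(steps):
    # OU update (exact discretization) for the time-correlated input
    p = rho * p + sigma_p * np.sqrt(1 - rho**2) * rng.standard_normal(n)
    # swing-type dynamics driven by the correlated input
    omega = omega + dt * (p - D * omega) / M

print(f"stationary mean={omega.mean():.4f}, std={omega.std():.4f}")
```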

  19. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Treesearch

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  20. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving the theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  1. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
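    The rank-correlation portion of such an importance analysis can be sketched as below, with a toy function and hypothetical parameter names in place of PCHEPM:

```python
# Rank-correlation importance sketch: Monte Carlo samples of the inputs are rank-
# correlated (Spearman) with the predicted concentration to find the dominant inputs.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 5000
params = {                                           # hypothetical parameter names
    "partition_coeff": rng.lognormal(0.0, 0.4, n),
    "settling_rate":   rng.uniform(0.5, 2.0, n),
    "degradation":     rng.normal(1.0, 0.1, n),
}

def toy_pcb_conc(kd, ws, kdeg):                      # toy stand-in for the fate model
    return 10.0 / (1.0 + kd * ws) * np.exp(-0.3 * kdeg)

conc = toy_pcb_conc(*params.values())
for name, x in params.items():
    rho, _ = spearmanr(x, conc)
    print(f"{name:16s} rank correlation: {rho:+.2f}")
```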

  2. Exemplifying the Effects of Parameterization Shortcomings in the Numerical Simulation of Geological Energy and Mass Storage

    NASA Astrophysics Data System (ADS)

    Dethlefsen, Frank; Tilmann Pfeiffer, Wolf; Schäfer, Dirk

    2016-04-01

    Numerical simulations of hydraulic, thermal, geomechanical, or geochemical (THMC) processes in the subsurface have been conducted for decades. Often, such simulations are commenced by applying a parameter set that is as realistic as possible. Then, a base scenario is calibrated on field observations. Finally, scenario simulations can be performed, for instance to forecast the system behavior after varying the input data. In the context of subsurface energy and mass storage, however, such model calibrations based on field data are often not available, as these storage actions have not been carried out so far. Consequently, the numerical models rely merely on the parameter set initially selected, and uncertainties resulting from a lack of parameter values or of process understanding may not be perceivable, let alone quantifiable. Therefore, conducting THMC simulations in the context of energy and mass storage deserves a particular review of the model parameterization and its input data, and such a review so far hardly exists to the required extent. Variability or aleatory uncertainty exists for geoscientific parameter values in general, and parameters for which numerous data points are available, such as aquifer permeabilities, may be described statistically and thereby exhibit statistical uncertainty. In this case, sensitivity analyses can be conducted to quantify the uncertainty in the simulation resulting from varying this parameter. For other parameters, the lack of data quantity and quality implies a fundamental change in the ongoing processes when such a parameter value is varied in numerical scenario simulations. As an example of such a scenario uncertainty, varying the capillary entry pressure, one of the multiphase flow parameters, can either allow or completely inhibit the penetration of an aquitard by gas. As a last example, the uncertainty of cap-rock fault permeabilities, and consequently of potential leakage rates of stored gases into shallow compartments, is regarded as recognized ignorance by the authors of this study, as no realistic approach exists to determine this parameter and values are best guesses only. In addition to these aleatory uncertainties, an equivalent classification is possible for rating epistemic uncertainties, which describe the degree of understanding of processes such as the geochemical and hydraulic effects following potential gas intrusions from deeper reservoirs into shallow aquifers. As an outcome of this grouping of uncertainties, prediction errors of scenario simulations can be calculated by sensitivity analyses if the uncertainties are identified as statistical. However, if scenario uncertainties exist, or if recognized ignorance has to be attested to a parameter or process in question, the outcomes of the simulations depend mainly on the decisions of the modeler in choosing parameter values or in interpreting which processes occur. In that case, the informative value of numerical simulations is limited by ambiguous simulation results, which cannot be refined without improving the geoscientific database through laboratory or field studies on a longer-term basis, so that the effects of the subsurface use may be predicted realistically. This discussion, amended by a compilation of available geoscientific data for parameterizing such simulations, will be presented in this study.

  3. Modeling sugar cane yield with a process-based model from site to continental scale: uncertainties arising from model structure and parameter values

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Huth, N.; Marin, F.; Martiné, J.-F.

    2014-01-01

    Agro-Land Surface Models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugar cane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS' phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on the parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of Partial Ranked Correlation Coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugar cane cultivation in Australia and Brazil. Ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugar cane yield to the model parameters for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.
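    A sketch of the Latin hypercube sampling plus partial ranked correlation coefficient (PRCC) step, with a toy yield function in place of ORCHIDEE-STICS; PRCC for each parameter is the correlation of rank-transformed residuals after regressing out all other parameters:

```python
# Latin hypercube sampling + PRCC sketch: sample the parameter space, rank-transform
# inputs and output, regress out the other parameters, and correlate the residuals.
import numpy as np

rng = np.random.default_rng(8)
n, k = 500, 3

lhs = np.empty((n, k))                               # simple Latin hypercube on [0, 1]^k
for j in range(k):
    lhs[:, j] = (rng.permutation(n) + rng.uniform(0, 1, n)) / n

def toy_yield(x):                                    # hypothetical biomass response
    return 2.0 * x[:, 0] + np.sin(3 * x[:, 1]) + 0.1 * x[:, 2]

def rank(a):
    return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

X, y = rank(lhs), rank(toy_yield(lhs))
for i in range(k):
    others = np.column_stack([np.ones(n), np.delete(X, i, axis=1)])
    rx = X[:, i] - others @ np.linalg.lstsq(others, X[:, i], rcond=None)[0]
    ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
    prcc = np.corrcoef(rx, ry)[0, 1]
    print(f"parameter {i}: PRCC = {prcc:+.2f}")
```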

  4. Modeling sugarcane yield with a process-based model from site to continental scale: uncertainties arising from model structure and parameter values

    NASA Astrophysics Data System (ADS)

    Valade, A.; Ciais, P.; Vuichard, N.; Viovy, N.; Caubel, A.; Huth, N.; Marin, F.; Martiné, J.-F.

    2014-06-01

    Agro-land surface models (agro-LSM) have been developed from the integration of specific crop processes into large-scale generic land surface models that allow calculating the spatial distribution and variability of energy, water and carbon fluxes within the soil-vegetation-atmosphere continuum. When developing agro-LSM models, particular attention must be given to the effects of crop phenology and management on the turbulent fluxes exchanged with the atmosphere, and the underlying water and carbon pools. A part of the uncertainty of agro-LSM models is related to their usually large number of parameters. In this study, we quantify the parameter-values uncertainty in the simulation of sugarcane biomass production with the agro-LSM ORCHIDEE-STICS, using a multi-regional approach with data from sites in Australia, La Réunion and Brazil. In ORCHIDEE-STICS, two models are chained: STICS, an agronomy model that calculates phenology and management, and ORCHIDEE, a land surface model that calculates biomass and other ecosystem variables forced by STICS phenology. First, the parameters that dominate the uncertainty of simulated biomass at harvest date are determined through a screening of 67 different parameters of both STICS and ORCHIDEE on a multi-site basis. Secondly, the uncertainty of harvested biomass attributable to those most sensitive parameters is quantified and specifically attributed to either STICS (phenology, management) or to ORCHIDEE (other ecosystem variables including biomass) through distinct Monte Carlo runs. The uncertainty on parameter values is constrained using observations by calibrating the model independently at seven sites. In a third step, a sensitivity analysis is carried out by varying the most sensitive parameters to investigate their effects at continental scale. A Monte Carlo sampling method associated with the calculation of partial ranked correlation coefficients is used to quantify the sensitivity of harvested biomass to input parameters on a continental scale across the large regions of intensive sugarcane cultivation in Australia and Brazil. The ten parameters driving most of the uncertainty in the ORCHIDEE-STICS modeled biomass at the 7 sites are identified by the screening procedure. We found that the 10 most sensitive parameters control phenology (maximum rate of increase of LAI) and root uptake of water and nitrogen (root profile and root growth rate, nitrogen stress threshold) in STICS, and photosynthesis (optimal temperature of photosynthesis, optimal carboxylation rate), radiation interception (extinction coefficient), and transpiration and respiration (stomatal conductance, growth and maintenance respiration coefficients) in ORCHIDEE. We find that the optimal carboxylation rate and photosynthesis temperature parameters contribute most to the uncertainty in harvested biomass simulations at site scale. The spatial variation of the ranked correlation between input parameters and modeled biomass at harvest is well explained by rain and temperature drivers, suggesting different climate-mediated sensitivities of modeled sugarcane yield to the model parameters, for Australia and Brazil. This study reveals the spatial and temporal patterns of uncertainty variability for a highly parameterized agro-LSM and calls for more systematic uncertainty analyses of such models.

  5. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    NASA Technical Reports Server (NTRS)

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2016-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  6. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions

    PubMed Central

    Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong

    2018-01-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706

  7. Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.

    PubMed

    Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong

    2016-03-01

    Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.

  8. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic

    PubMed Central

    Guillas, S.; Georgiopoulou, A.; Dias, F.

    2017-01-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. PMID:28484339
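    A minimal Gaussian process emulator sketch in the spirit of the two-stage approach above (this is not the authors' code; it uses scikit-learn and a toy simulator in place of the landslide-tsunami codes):

```python
# GP emulator sketch: train a Gaussian process on a handful of expensive simulator runs,
# then propagate input uncertainty through the cheap emulator instead of the simulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):            # toy stand-in for the landslide-tsunami code
    return np.sin(3 * x[:, 0]) * (1 + 0.5 * x[:, 1])

rng = np.random.default_rng(9)
X_train = rng.uniform(0, 1, (30, 2))   # 30 design points (e.g. slide volume, friction)
y_train = expensive_simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]), normalize_y=True)
gp.fit(X_train, y_train)

# Propagate calibrated input distributions through the emulator at negligible cost
X_new = rng.normal([0.5, 0.6], [0.1, 0.05], size=(10_000, 2))
mu, sd = gp.predict(X_new, return_std=True)
print(f"emulated output: mean={mu.mean():.3f}, "
      f"95% range=({np.percentile(mu, 2.5):.3f}, {np.percentile(mu, 97.5):.3f})")
```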

  9. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic.

    PubMed

    Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F

    2017-04-01

    Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages. First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained.

  10. Effective UV radiation from model calculations and measurements

    NASA Technical Reports Server (NTRS)

    Feister, Uwe; Grewe, Rolf

    1994-01-01

    Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer as well as turbidity were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of input data to the model and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.

  11. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
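    A sketch of variance-based (Sobol') sensitivity analysis using a simple pick-freeze estimator and a toy model in place of LISFLOOD-FP; the factor names and model form are assumptions:

```python
# First-order Sobol' index sketch (Saltelli-style pick-freeze estimator): S_i estimates
# the fraction of output variance explained by input factor i acting alone.
import numpy as np

def toy_model(x):                       # stand-in: e.g. roughness, inflow, DEM error
    return x[:, 0] ** 2 + 2 * x[:, 1] + 0.2 * x[:, 0] * x[:, 2]

rng = np.random.default_rng(10)
n, k = 20_000, 3
A = rng.uniform(0, 1, (n, k))           # two independent sample matrices
B = rng.uniform(0, 1, (n, k))
yA, yB = toy_model(A), toy_model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # "freeze" all factors except factor i
    yABi = toy_model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y
    print(f"factor {i}: first-order Sobol index ~ {S_i:.2f}")
```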

  12. Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4

    NASA Astrophysics Data System (ADS)

    Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.

    2013-12-01

    Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
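    A conceptual sketch of the sampling-plus-emulation workflow, with a toy warming function in place of CCSM4: Latin hypercube sampling over normalized parameter ranges, an SVR statistical emulator, and a PDF of the output from a dense resample of the fitted emulator:

```python
# Latin hypercube sampling + SVR emulator sketch (toy model, not CCSM4): the emulator
# is fit to a limited set of "model runs" and then resampled densely to estimate the
# probability density of the output.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(11)
n, k = 300, 4

lhs = np.empty((n, k))                               # Latin hypercube in [0, 1]^k
for j in range(k):
    lhs[:, j] = (rng.permutation(n) + rng.uniform(0, 1, n)) / n

def toy_warming(x):                                  # stand-in for equilibrium warming (K)
    return 3.0 + 0.8 * (x[:, 0] - 0.5) + 0.4 * (x[:, 1] - 0.5) ** 2 - 0.3 * (x[:, 2] - 0.5)

y = toy_warming(lhs) + rng.normal(0, 0.05, n)        # "model runs" with small noise

emulator = SVR(C=10.0, epsilon=0.01).fit(lhs, y)     # RBF-kernel support vector regression

dense = rng.uniform(0, 1, (100_000, k))              # dense resample of the parameter space
warming = emulator.predict(dense)
print(f"10-90 percentile range: {np.percentile(warming, 10):.2f}-{np.percentile(warming, 90):.2f} K")
```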

  13. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.

  14. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  15. Ensemble urban flood simulation in comparison with laboratory-scale experiments: Impact of interaction models for manhole, sewer pipe, and surface flow

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Lee, Seungsoo; An, Hyunuk; Kawaike, Kenji; Nakagawa, Hajime

    2016-11-01

    An urban flood is an integrated phenomenon that is affected by various uncertainty sources such as input forcing, model parameters, complex geometry, and exchanges of flow among different domains at the surface and in the subsurface. Despite considerable advances in urban flood modeling techniques, limited knowledge is currently available with regard to the impact of dynamic interaction among different flow domains on urban floods. In this paper, an ensemble method for urban flood modeling is presented to consider the parameter uncertainty of interaction models among a manhole, a sewer pipe, and surface flow. Laboratory-scale experiments on urban flooding and inundation are performed under various flow conditions to investigate the parameter uncertainty of interaction models. The results show that ensemble simulation using interaction models based on weir and orifice formulas reproduces experimental data with high accuracy and reveals the identifiability of model parameters. Among interaction-related parameters, the parameters of the sewer-manhole interaction show lower uncertainty than those of the sewer-surface interaction. Experimental data obtained under unsteady-state conditions are more informative than those obtained under steady-state conditions for assessing the parameter uncertainty of interaction models. Although the optimal parameters vary according to the flow conditions, the difference is marginal. Simulation results also confirm the capability of the interaction models and the potential of ensemble-based approaches to facilitate urban flood simulation.
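
    The manhole exchange terms referred to above are commonly written as weir and orifice discharge laws; a minimal sketch follows. The formulas are the standard textbook forms, and the coefficients and dimensions are illustrative assumptions, not the specific interaction models calibrated in the study.

        import math

        G = 9.81  # gravitational acceleration, m/s^2

        def weir_discharge(c_w, width, head):
            """Free-weir exchange over a manhole rim: Q = c_w * B * h * sqrt(2*g*h),
            with an empirical coefficient c_w (one of the uncertain interaction
            parameters); width and head in metres, Q in m^3/s."""
            return c_w * width * head * math.sqrt(2.0 * G * head) if head > 0.0 else 0.0

        def orifice_discharge(c_o, area, head_diff):
            """Submerged-orifice exchange: Q = c_o * A * sqrt(2*g*|dh|), signed by the
            head difference between the surface and the sewer pipe."""
            sign = 1.0 if head_diff >= 0.0 else -1.0
            return sign * c_o * area * math.sqrt(2.0 * G * abs(head_diff))

        # Example: 5 cm of water over a 0.5 m wide rim, and 0.3 m of head across
        # a 0.25 m^2 manhole opening.
        print(weir_discharge(c_w=0.6, width=0.5, head=0.05))
        print(orifice_discharge(c_o=0.6, area=0.25, head_diff=0.3))

    In an ensemble run, coefficients such as c_w and c_o are exactly the interaction-related parameters whose uncertainty the experiments constrain.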

  16. Stochastic multi-objective auto-optimization for resource allocation decision-making in fixed-input health systems.

    PubMed

    Bastian, Nathaniel D; Ekin, Tahir; Kang, Hyojung; Griffin, Paul M; Fulton, Lawrence V; Grannan, Benjamin C

    2017-06-01

    The management of hospitals within fixed-input health systems such as the U.S. Military Health System (MHS) can be challenging due to the large number of hospitals, as well as the uncertainty in input resources and achievable outputs. This paper introduces a stochastic multi-objective auto-optimization model (SMAOM) for resource allocation decision-making in fixed-input health systems. The model can automatically identify where to re-allocate system input resources at the hospital level in order to optimize overall system performance, while considering uncertainty in the model parameters. The model is applied to 128 hospitals in the three services (Air Force, Army, and Navy) in the MHS using hospital-level data from 2009-2013. The results are compared to the traditional input-oriented variable returns-to-scale Data Envelopment Analysis (DEA) model. The application of SMAOM to the MHS increases the expected system-wide technical efficiency by 18% over the DEA model while also accounting for uncertainty in health system inputs and outputs. The developed method is useful for decision-makers in the Defense Health Agency (DHA), who have a strategic-level objective of integrating clinical and business processes through better sharing of resources across the MHS and through system-wide standardization across the services. It is also less sensitive to data outliers or sampling errors than traditional DEA methods.

  17. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

    This paper shows that instability of hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of the two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus, it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality is an incomplete indicator of the stability of hydrological model selection and prediction problems.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vrugt, Jasper A; Robinson, Bruce A; Ter Braak, Cajo J F

    In recent years, a strong debate has emerged in the hydrologic literature regarding what constitutes an appropriate framework for uncertainty estimation. Particularly, there is strong disagreement whether an uncertainty framework should have its roots within a proper statistical (Bayesian) context, or whether such a framework should be based on a different philosophy and implement informal measures and weaker inference to summarize parameter and predictive distributions. In this paper, we compare a formal Bayesian approach using Markov Chain Monte Carlo (MCMC) with generalized likelihood uncertainty estimation (GLUE) for assessing uncertainty in conceptual watershed modeling. Our formal Bayesian approach is implemented using the recently developed differential evolution adaptive metropolis (DREAM) MCMC scheme with a likelihood function that explicitly considers model structural, input and parameter uncertainty. Our results demonstrate that DREAM and GLUE can generate very similar estimates of total streamflow uncertainty. This suggests that formal and informal Bayesian approaches have more common ground than the hydrologic literature and ongoing debate might suggest. The main advantage of formal approaches is, however, that they attempt to disentangle the effect of forcing, parameter and model structural error on total predictive uncertainty. This is key to improving hydrologic theory and to better understand and predict the flow of water through catchments.
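
    For readers unfamiliar with GLUE, its informal-likelihood workflow can be sketched in a few lines. Everything below (the stand-in rainfall-runoff model, the Nash-Sutcliffe score, the behavioural threshold of 0.3) is an illustrative assumption rather than the configuration used in the paper; DREAM replaces the simple accept/reject step with a formal likelihood and an adaptive MCMC sampler.

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_streamflow(theta, forcing):
            # Placeholder rainfall-runoff model standing in for the conceptual
            # watershed model of the abstract; purely illustrative.
            return theta[0] * forcing + theta[1]

        def glue_uncertainty(observed, forcing, n_samples=5000, threshold=0.3):
            """Minimal GLUE: sample parameters from uniform priors, score each set
            with an informal likelihood (Nash-Sutcliffe efficiency), keep the
            'behavioural' sets above a threshold, and report weighted statistics."""
            thetas = rng.uniform([0.1, -1.0], [2.0, 1.0], size=(n_samples, 2))
            weights, preds = [], []
            for theta in thetas:
                sim = simulate_streamflow(theta, forcing)
                nse = 1.0 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)
                if nse > threshold:  # behavioural parameter set
                    weights.append(nse)
                    preds.append(sim)
            w = np.array(weights) / np.sum(weights)
            preds = np.array(preds)
            weighted_mean = (w[:, None] * preds).sum(axis=0)
            return weighted_mean, preds.min(axis=0), preds.max(axis=0)

        forcing = rng.gamma(2.0, 2.0, size=100)
        observed = 1.2 * forcing + 0.1 + rng.normal(0.0, 0.5, size=100)
        mean, lower, upper = glue_uncertainty(observed, forcing)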

  19. Uncertainties in the Antarctic Ice Sheet Contribution to Sea Level Rise: Exploration of Model Response to Errors in Climate Forcing, Boundary Conditions, and Internal Parameters

    NASA Astrophysics Data System (ADS)

    Schlegel, N.; Seroussi, H. L.; Boening, C.; Larour, E. Y.; Limonadi, D.; Schodlok, M.; Watkins, M. M.

    2017-12-01

    The Jet Propulsion Laboratory-University of California at Irvine Ice Sheet System Model (ISSM) is a thermo-mechanical 2D/3D parallelized finite element software used to physically model the continental-scale flow of ice at high resolutions. Embedded into ISSM are uncertainty quantification (UQ) tools, based on the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) software. ISSM-DAKOTA offers various UQ methods for the investigation of how errors in model input impact uncertainty in simulation results. We utilize these tools to regionally sample model input and key parameters, based on specified bounds of uncertainty, and run a suite of continental-scale 100-year ISSM forward simulations of the Antarctic Ice Sheet. Resulting diagnostics (e.g., spread in local mass flux and regional mass balance) inform our conclusions about which parameters and/or forcings have the greatest impact on century-scale model simulations of ice sheet evolution. The results allow us to prioritize the key datasets and measurements that are critical for the minimization of ice sheet model uncertainty. Overall, we find that Antarctica's total sea level contribution is strongly affected by grounding line retreat, which is driven by the magnitude of ice shelf basal melt rates and by errors in bedrock topography. In addition, results suggest that after 100 years of simulation, Thwaites glacier is the most significant source of model uncertainty, and its drainage basin has the largest potential for future sea level contribution. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  20. Fukushima Daiichi unit 1 uncertainty analysis--Preliminary selection of uncertain parameters and analysis methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoni, Jeffrey N.; Kalinich, Donald A.

    2014-02-01

    Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit 1 (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.

  1. `spup' - An R Package for Analysis of Spatial Uncertainty Propagation and Application to Trace Gas Emission Simulations

    NASA Astrophysics Data System (ADS)

    Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.

    2016-12-01

    Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation analysis and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open-source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining uncertainty propagation, starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we demonstrate that the 'spup' package is an effective and easy-to-use tool to be applied even in a very complex case study, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse the propagation of uncertainties associated with spatial variability of the model driving forces such as rainfall, nitrogen deposition and fertilizer inputs. The uncertainty propagation is analysed for the prediction of emissions of N2O and CO2 for a German low-mountainous, agriculturally developed catchment. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could serve as guidance for developing best management practices and model improvement strategies.

  2. Comparison of Homogeneous and Heterogeneous CFD Fuel Models for Phase I of the IAEA CRP on HTR Uncertainties Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Su-Jong Yoon

    2014-04-01

    Computational Fluid Dynamics (CFD) evaluation of homogeneous and heterogeneous fuel models was performed as part of the Phase I calculations of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on High Temperature Reactor (HTR) Uncertainties in Modeling (UAM). This study was focused on the nominal localized stand-alone fuel thermal response, as defined in Ex. I-3 and I-4 of the HTR UAM. The aim of the stand-alone thermal unit-cell simulation is to isolate the effect of material and boundary input uncertainties on a very simplified problem, before propagation of these uncertainties is performed in the subsequent coupled neutronics/thermal fluids phases of the benchmark. In many of the previous studies for high temperature gas cooled reactors, the volume-averaged homogeneous mixture model of a single fuel compact has been applied. In the homogeneous model, the Tristructural Isotropic (TRISO) fuel particles in the fuel compact were not modeled directly and an effective thermal conductivity was employed for the thermo-physical properties of the fuel compact. In contrast, in the heterogeneous model, the uranium carbide (UCO), inner and outer pyrolytic carbon (IPyC/OPyC) and silicon carbide (SiC) layers of the TRISO fuel particles are explicitly modeled. The fuel compact is modeled as a heterogeneous mixture of TRISO fuel kernels embedded in H-451 matrix graphite. In this study, steady-state and transient CFD simulations were performed with both homogeneous and heterogeneous models to compare the thermal characteristics. The nominal values of the input parameters are used for this CFD analysis. In a future study, the effects of input uncertainties in the material properties and boundary parameters will be investigated and reported.

  3. Uncertainty Analysis of A Flood Risk Mapping Procedure Applied In Urban Areas

    NASA Astrophysics Data System (ADS)

    Krause, J.; Uhrich, S.; Bormann, H.; Diekkrüger, B.

    In the framework of the IRMA-Sponge program, the presented study was part of the joint research project FRHYMAP (flood risk and hydrological mapping). A simple conceptual flooding model (FLOODMAP) has been developed to simulate flooded areas beside rivers within cities. FLOODMAP requires a minimum of input data (digital elevation model (DEM), river line, water level plain) and parameters and calculates the flood extent as well as the spatial distribution of flood depths. Of course, the simulated model results are affected by errors and uncertainties. Possible sources of uncertainty are the model structure, model parameters and input data. Thus, after the model validation (comparison of the simulated flood extent to the observed extent, derived from airborne pictures), the uncertainty of the essential input data set (digital elevation model) was analysed. Monte Carlo simulations were performed to assess the effect of uncertainties concerning the statistics of DEM quality and to derive flooding probabilities from the set of simulations. The questions concerning the minimum resolution of a DEM required for flood simulation and concerning the best aggregation procedure for a given DEM were answered by comparing the results obtained using all available standard GIS aggregation procedures. Seven different aggregation procedures were applied to high-resolution DEMs (1-2 m) in three cities (Bonn, Cologne, Luxembourg). Based on this analysis, the effect of 'uncertain' DEM data was estimated and compared with other sources of uncertainty. Especially socio-economic information and monetary transfer functions required for a damage risk analysis show a high uncertainty. Therefore this study helps to analyse the weak points of the flood risk and damage risk assessment procedure.
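
    A minimal sketch of the Monte Carlo DEM-uncertainty analysis described above might look as follows. The planar water-level flooding rule, the Gaussian elevation error of 0.5 m and the synthetic DEM are assumptions made purely for illustration; neither FLOODMAP itself nor the real DEM error statistics are reproduced here.

        import numpy as np

        rng = np.random.default_rng(2)

        def flood_depth(dem, water_level):
            """Planar water-level flooding: every cell below the water level plain
            is wet, with depth = water level - elevation."""
            return np.clip(water_level - dem, 0.0, None)

        def flooding_probability(dem, water_level, sigma_dem=0.5, n_realizations=500):
            """Monte Carlo propagation of DEM elevation uncertainty (assumed here to
            be independent Gaussian noise with standard deviation sigma_dem) into a
            per-cell probability of being flooded."""
            wet_count = np.zeros_like(dem)
            for _ in range(n_realizations):
                perturbed = dem + rng.normal(0.0, sigma_dem, size=dem.shape)
                wet_count += flood_depth(perturbed, water_level) > 0.0
            return wet_count / n_realizations

        dem = rng.uniform(48.0, 55.0, size=(100, 100))  # synthetic DEM, elevations in m
        prob_flooded = flooding_probability(dem, water_level=50.0)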

  4. Influences of system uncertainties on the numerical transfer path analysis of engine systems

    NASA Astrophysics Data System (ADS)

    Acri, A.; Nijman, E.; Acri, A.; Offner, G.

    2017-10-01

    Practical mechanical systems operate with some degree of uncertainty. In numerical models, uncertainties can result from poorly known or variable parameters, from geometrical approximation, from discretization or numerical errors, from uncertain inputs, or from rapidly changing forcing that can best be described in a stochastic framework. Recently, random matrix theory was introduced to take parameter uncertainties into account in numerical modeling problems. In particular, in this paper Wishart random matrix theory is applied to a multi-body dynamic system to generate random variations of the properties of system components. Multi-body dynamics is a powerful numerical tool widely used during the design of new engines. In this paper the influence of model parameter variability on the results obtained from the multi-body simulation of engine dynamics is investigated. The aim is to define a methodology to properly assess and rank system sources when dealing with uncertainties. Particular attention is paid to the influence of these uncertainties on the analysis and the assessment of the different engine vibration sources. The effects of different levels of uncertainty are illustrated using a representative numerical powertrain model. A numerical transfer path analysis, based on system dynamic substructuring, is used to derive and assess the internal engine vibration sources. The results obtained from this analysis are used to derive correlations between parameter uncertainties and the statistical distribution of results. The derived statistical information can be used to advance the knowledge of the multi-body analysis and the assessment of system sources when uncertainties in model parameters are considered.

  5. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed a maximum standard deviation of 0.3 times body weight and a maximum range of 2.1 times body weight. In addition, the output variables significantly correlated with only a few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty of the same order of magnitude as their value, although this condition has a low probability of occurring. PMID:25390896

  6. Uncertainty in modeled upper ocean heat content change

    NASA Astrophysics Data System (ADS)

    Tokmakian, Robin; Challenor, Peter

    2014-02-01

    This paper examines the uncertainty in the change in the heat content in the ocean component of a general circulation model. We describe the design and implementation of our statistical methodology. Using an ensemble of model runs and an emulator, we produce an estimate of the full probability distribution function (PDF) for the change in upper ocean heat in an Atmosphere/Ocean General Circulation Model, the Community Climate System Model v. 3, across a multi-dimensional input space. We show how the emulator of the GCM's heat content change, and hence the PDF, can be validated and how implausible outcomes from the emulator can be identified when compared to observational estimates of the metric. In addition, the paper describes how the emulator outcomes and related uncertainty information might inform estimates of the same metric from a multi-model Coupled Model Intercomparison Project phase 3 ensemble. We illustrate how to (1) construct an ensemble based on experiment design methods, (2) construct and evaluate an emulator for a particular metric of a complex model, (3) validate the emulator using observational estimates and explore the input space with respect to implausible outcomes and (4) contribute to the understanding of uncertainties within a multi-model ensemble. Finally, we estimate the most likely value for heat content change and its uncertainty for the model, with respect to both observations and the uncertainty in the value of the input parameters.
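
    The emulator-plus-implausibility workflow outlined above can be illustrated with a Gaussian-process surrogate. The snippet below uses scikit-learn's GaussianProcessRegressor as a stand-in for the statistical emulator; the two input parameters, the analytic "model runs" and the observational estimate are hypothetical, and the implausibility cut-off of 3 is a conventional choice rather than the one used in the study.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(3)

        # Design over two illustrative input parameters; the "ensemble runs" are a
        # cheap analytic stand-in for the GCM simulations described in the abstract.
        X_design = rng.uniform(0.0, 1.0, size=(40, 2))
        heat_content_change = np.sin(3.0 * X_design[:, 0]) + 0.5 * X_design[:, 1] ** 2

        # Fit the emulator to the ensemble of runs.
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                                      normalize_y=True)
        gp.fit(X_design, heat_content_change)

        # Predict the metric, with uncertainty, anywhere in the input space.
        X_new = rng.uniform(0.0, 1.0, size=(1000, 2))
        mean, std = gp.predict(X_new, return_std=True)

        # Crude implausibility screen against a hypothetical observational estimate.
        obs, obs_err = 0.8, 0.1
        implausibility = np.abs(mean - obs) / np.sqrt(std ** 2 + obs_err ** 2)
        plausible_inputs = X_new[implausibility < 3.0]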

  7. The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, Martin M.

    Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.

  8. Decision & Management Tools for DNAPL Sites: Optimization of Chlorinated Solvent Source and Plume Remediation Considering Uncertainty

    DTIC Science & Technology

    2010-09-01

    differentiated between source codes and input/output files. The text makes references to a REMChlor-GoldSim model. The text also refers to the REMChlor... To the extent possible, the instructions should be accurate and precise. The documentation should differentiate between describing what is actually... Windows XP operating system. Model Input Parameters: The input parameters were identical to those utilized and reported by CDM (see Table I from...

  9. Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.

    2002-05-01

    Pacific Northwest National Laboratory is in the process of development and implementation of an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goals of the effort are development and implementation of an uncertainty estimation methodology for use in future assessments and analyses being made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which of the conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, which will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the pdfs for the important uncertain parameters, including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
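
    For reference, the first-order second-moment propagation step mentioned under item c) amounts to a truncated Taylor expansion of the model prediction Y = g(x) about the mean parameter vector; in generic notation (not taken from the abstract):

        \operatorname{Var}[Y] \approx \sum_{i=1}^{n}
            \left( \frac{\partial g}{\partial x_i}\Big|_{\bar{\mathbf{x}}} \right)^{2} \operatorname{Var}[x_i]
          + 2 \sum_{i<j} \frac{\partial g}{\partial x_i}\Big|_{\bar{\mathbf{x}}}
            \frac{\partial g}{\partial x_j}\Big|_{\bar{\mathbf{x}}} \operatorname{Cov}[x_i, x_j]

    When the parameters are independent the covariance term drops out, which is what makes the method attractive when a full Monte Carlo propagation through the site-wide model is too expensive.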

  10. Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.

    PubMed

    Herzallah, Randa

    2015-03-01

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf is presented, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained. Copyright © 2014 Elsevier Ltd. All rights reserved.
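
    For context, the design objective referred to above is usually written as the Kullback-Leibler divergence between the actual closed-loop joint pdf f and the ideal joint pdf f^I; the generic form below uses notation assumed for illustration, with z denoting the closed-loop state and control trajectory:

        D_{\mathrm{KL}}\!\left(f \,\middle\|\, f^{I}\right) = \int f(z)\,\ln\frac{f(z)}{f^{I}(z)}\,\mathrm{d}z

    The controller is then the input-generating pdf that minimises this divergence.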

  11. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
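
    In symbols, and using notation paraphrased from the abstract rather than quoted from the paper, the second criterion separates into the squared bias and model variance terms mentioned above, with the expectation and variance taken over the distributions of model structure, inputs and parameters:

        \mathrm{MSEP}_{\mathrm{uncertain}}(X) \;=\;
            \underbrace{\bigl(y(X) - \mathrm{E}[\hat{y}(X)]\bigr)^{2}}_{\text{squared bias (hindcasts)}}
          \;+\;
            \underbrace{\operatorname{Var}[\hat{y}(X)]}_{\text{model variance (simulation experiment)}}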

  12. A methodology for formulating a minimal uncertainty model for robust control system design and analysis

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1989-01-01

    In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of it addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.

  13. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on estimating only the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e., the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  14. Parametric Robust Control and System Identification: Unified Approach

    NASA Technical Reports Server (NTRS)

    Keel, L. H.

    1996-01-01

    During the period of this support, a new control system design and analysis method has been studied. This approach deals with control systems containing uncertainties that are represented in terms of their transfer function parameters. Such a representation of the control system is common, and many physical parameter variations fall into this type of uncertainty. Techniques developed here are capable of providing nonconservative analysis of such control systems with parameter variations. We have also developed techniques to deal with control systems when their state space representations are given rather than transfer functions. In this case, the plant parameters appear as entries of the state space matrices. Finally, a system modeling technique to construct such systems from raw input-output frequency domain data has been developed.

  15. Recovering stellar population parameters via two full-spectrum fitting algorithms in the absence of model uncertainties

    NASA Astrophysics Data System (ADS)

    Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun

    2018-05-01

    Using mock spectra based on the Vazdekis/MILES library fitted within the wavelength region 3600-7350 Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithms and observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger and metallicities lower than the true values. For reference, at S/N = 30, and for the worst case (t = 10^8 yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm, we find trends similar to pPXF when the input E(B-V) < 0.2 mag. However, with larger input E(B-V), the biases of the output parameters do not converge to zero even at the highest S/N and are strongly affected by the shape of the error spectra. This effect is particularly dramatic for the youngest age (t = 10^8 yr), for which all population parameters can be strongly different from the input values, with significantly underestimated dust extinction and [M/H], and larger ages and M*/Lr. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the number of Markov chains and annealing loops ("slow mode"). For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".

  16. Prediction and assimilation of surf-zone processes using a Bayesian network: Part I: Forward models

    USGS Publications Warehouse

    Plant, Nathaniel G.; Holland, K. Todd

    2011-01-01

    Prediction of coastal processes, including waves, currents, and sediment transport, can be obtained from a variety of detailed geophysical-process models with many simulations showing significant skill. This capability supports a wide range of research and applied efforts that can benefit from accurate numerical predictions. However, the predictions are only as accurate as the data used to drive the models and, given the large temporal and spatial variability of the surf zone, inaccuracies in data are unavoidable such that useful predictions require corresponding estimates of uncertainty. We demonstrate how a Bayesian-network model can be used to provide accurate predictions of wave-height evolution in the surf zone given very sparse and/or inaccurate boundary-condition data. The approach is based on a formal treatment of a data-assimilation problem that takes advantage of significant reduction of the dimensionality of the model system. We demonstrate that predictions of a detailed geophysical model of the wave evolution are reproduced accurately using a Bayesian approach. In this surf-zone application, forward prediction skill was 83%, and uncertainties in the model inputs were accurately transferred to uncertainty in output variables. We also demonstrate that if modeling uncertainties were not conveyed to the Bayesian network (i.e., perfect data or model were assumed), then overly optimistic prediction uncertainties were computed. More consistent predictions and uncertainties were obtained by including model-parameter errors as a source of input uncertainty. Improved predictions (skill of 90%) were achieved because the Bayesian network simultaneously estimated optimal parameters while predicting wave heights.

  17. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N > 1.

  18. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans that include manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development, as they have significant effects on focus parameters such as lift-off thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3-degree-of-freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. The process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develops resulting vehicles using a Maximum Likelihood Estimate (MLE) process for targeting uncertainty biases. These vehicles represent various missions and configurations and are used as key inputs to a variety of analyses in the SLS design process, including 6-DOF dispersions, separation clearances, and engine-out failure studies.

  19. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

    probability distribution for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre for uniformly distributed parameters)... Parameters and windfields will drive our simulations. We will use uncertainty quantification methodology - polynomial chaos quadrature in combination with data integration - to complete the DDDAS loop.

  20. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resources on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and on where and when this output is most relevant.

  1. Application of Non-Deterministic Methods to Assess Modeling Uncertainties for Reinforced Carbon-Carbon Debris Impacts

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan

    2004-01-01

    The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.

  2. Evaluation of calibration efficacy under different levels of uncertainty

    DOE PAGES

    Heo, Yeonsook; Graziano, Diane J.; Guzowski, Leah; ...

    2014-06-10

    This study examines how calibration performs under different levels of uncertainty in model input data. It specifically assesses the efficacy of Bayesian calibration to enhance the reliability of EnergyPlus model predictions. A Bayesian approach can be used to update uncertain values of parameters, given measured energy-use data, and to quantify the associated uncertainty. We assess the efficacy of Bayesian calibration under a controlled virtual-reality setup, which enables rigorous validation of the accuracy of calibration results in terms of both calibrated parameter values and model predictions. Case studies demonstrate the performance of Bayesian calibration of base models developed from audit data with differing levels of detail in building design, usage, and operation.
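
    A toy version of the Bayesian calibration loop makes the idea concrete. The building "model", the infiltration-rate parameter, the measurement noise and the random-walk Metropolis sampler below are all illustrative assumptions; the study calibrates EnergyPlus models against measured energy use, which the snippet only mimics with a cheap analytic surrogate.

        import numpy as np

        rng = np.random.default_rng(4)

        def energy_model(infiltration_rate, outdoor_temp):
            # Crude stand-in for an EnergyPlus run: monthly heating energy rises with
            # the (uncertain) infiltration rate and with colder outdoor temperatures.
            return 120.0 + 40.0 * infiltration_rate * np.maximum(18.0 - outdoor_temp, 0.0)

        outdoor_temp = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])
        measured = energy_model(0.7, outdoor_temp) + rng.normal(0.0, 15.0, size=outdoor_temp.size)

        def log_posterior(theta, sigma=15.0):
            """Gaussian likelihood of the measured energy use plus a uniform prior on [0, 2]."""
            if not 0.0 <= theta <= 2.0:
                return -np.inf
            resid = measured - energy_model(theta, outdoor_temp)
            return -0.5 * np.sum((resid / sigma) ** 2)

        # Random-walk Metropolis: a minimal stand-in for the Bayesian updating step.
        theta, logp, samples = 1.0, log_posterior(1.0), []
        for _ in range(20000):
            proposal = theta + rng.normal(0.0, 0.05)
            logp_prop = log_posterior(proposal)
            if np.log(rng.uniform()) < logp_prop - logp:
                theta, logp = proposal, logp_prop
            samples.append(theta)
        posterior = np.array(samples[5000:])      # discard burn-in
        print(posterior.mean(), posterior.std())  # calibrated value and its uncertainty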

  3. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  4. Calibrating the stress-time curve of a combined finite-discrete element method to a Split Hopkinson Pressure Bar experiment

    DOE PAGES

    Osthus, Dave; Godinez, Humberto C.; Rougier, Esteban; ...

    2018-05-01

    We present a generic method for automatically calibrating a computer code to an experiment, with uncertainty, for a given “training” set of computer code runs. The calibration technique is general and probabilistic, meaning the calibration uncertainty is represented in the form of a probability distribution. We demonstrate the calibration method by calibrating a combined Finite-Discrete Element Method (FDEM) to a Split Hopkinson Pressure Bar (SHPB) experiment with a granite sample. The probabilistic calibration method combines runs of a FDEM computer simulation for a range of “training” settings and experimental uncertainty to develop a statistical emulator. The process allows for calibration of input parameters and produces output quantities with uncertainty estimates for settings where simulation results are desired. Input calibration and FDEM fitted results are presented. We find that the maximum shear strength σ_t^max and, to a lesser extent, the maximum tensile strength σ_n^max govern the behavior of the stress-time curve before and around the peak, while the specific energy in Mode II (shear), E_t, largely governs the post-peak behavior of the stress-time curve. Good agreement is found between the calibrated FDEM and the SHPB experiment. Interestingly, we find the SHPB experiment to be rather uninformative for calibrating the softening-curve shape parameters (a, b, and c). This work stands as a successful demonstration of how a general probabilistic calibration framework can automatically calibrate FDEM parameters to an experiment.

  5. Global Nitrous Oxide Emissions from Agricultural Soils: Magnitude and Uncertainties Associated with Input Data and Model Parameters

    NASA Astrophysics Data System (ADS)

    Xu, R.; Tian, H.; Pan, S.; Yang, J.; Lu, C.; Zhang, B.

    2016-12-01

    Human activities have caused significant perturbations of the nitrogen (N) cycle, resulting in about 21% increase of atmospheric N2O concentration since the pre-industrial era. This large increase is mainly caused by intensive agricultural activities including the application of nitrogen fertilizer and the expansion of leguminous crops. Substantial efforts have been made to quantify the global and regional N2O emission from agricultural soils in the last several decades using a wide variety of approaches, such as ground-based observation, atmospheric inversion, and process-based model. However, large uncertainties exist in those estimates as well as methods themselves. In this study, we used a coupled biogeochemical model (DLEM) to estimate magnitude, spatial, and temporal patterns of N2O emissions from global croplands in the past five decades (1961-2012). To estimate uncertainties associated with input data and model parameters, we have implemented a number of simulation experiments with DLEM, accounting for key parameter values that affect calculation of N2O fluxes (i.e., maximum nitrification and denitrification rates, N fixation rate, and the adsorption coefficient for soil ammonium and nitrate), different sets of input data including climate, land management practices (i.e., nitrogen fertilizer types, application rates and timings, with/without irrigation), N deposition, and land use and land cover change. This work provides a robust estimate of global N2O emissions from agricultural soils as well as identifies key gaps and limitations in the existing model and data that need to be investigated in the future.

  6. Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2018-01-01

    Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ_NEE) for the different ensemble members from ~2-3 g C m^-2 yr^-1 (with uncertain parameters) to ~45 g C m^-2 yr^-1 (C3 grass) and ~75 g C m^-2 yr^-1 (C3 crops) with perturbed forcings. This increase in uncertainty is related to the impact of the meteorological forcings on leaf onset and senescence, and enhanced/reduced drought stress related to perturbation of precipitation. The NEE uncertainty for the forest plant functional type (PFT) was considerably lower (σ_NEE ~ 4.0-13.5 g C m^-2 yr^-1 with perturbed parameters, meteorological forcings and initial states). We conclude that LAI and NEE uncertainty with CLM is clearly underestimated if uncertain meteorological forcings and initial states are not taken into account.

  7. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  8. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  9. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  10. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  11. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
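
    The following Python sketch illustrates the Morris one-at-a-time idea on a toy function (it is not the CAM experiment, and it uses a simplified trajectory design): random trajectories step one parameter at a time, the cost grows linearly with the number of parameters, and the mean absolute elementary effect (mu*) ranks inputs while a large sigma flags nonlinearity or interactions:

    import numpy as np

    rng = np.random.default_rng(0)
    n_params, n_traj, levels = 5, 30, 4
    delta = levels / (2.0 * (levels - 1))          # standard Morris step in [0, 1]

    def model(x):
        # Toy response with one nonlinear interaction (x2*x3) and one nearly inert input (x4).
        return 2 * x[0] + 0.5 * x[1] + 4 * x[2] * x[3] + 0.1 * x[4] + x[2] ** 2

    ee = [[] for _ in range(n_params)]
    grid = np.arange(levels - 1) / (levels - 1)    # admissible base levels
    for _ in range(n_traj):
        x = rng.choice(grid, size=n_params)        # random base point
        y0 = model(x)
        for i in rng.permutation(n_params):        # move one parameter at a time
            x_new = x.copy()
            x_new[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
            y1 = model(x_new)
            ee[i].append((y1 - y0) / (x_new[i] - x[i]))
            x, y0 = x_new, y1                      # trajectory continues from the new point

    mu_star = [np.mean(np.abs(e)) for e in ee]
    sigma = [np.std(e) for e in ee]
    for i in np.argsort(mu_star)[::-1]:
        print(f"x{i}: mu* = {mu_star[i]:6.2f}  sigma = {sigma[i]:6.2f}")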

  12. Investigation of Multi-Input Multi-Output Robust Control Methods to Handle Parametric Uncertainties in Autopilot Design.

    PubMed

    Kasnakoğlu, Coşku

    2016-01-01

    Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances, small perturbations of these can cause considerable variations in flight characteristics. This research considers the impact of varying these parameters together, generalizing existing studies that examine the effects of particular parameters on selected modes. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for each channel, which are then used in parallel in actual flight. It is demonstrated that an attitude controller built this way can perform well on the separate nominal cases but can become unstable under a perturbation of no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, namely loop-shaping and μ-synthesis, are outlined as alternatives and are observed to handle large parametric changes of 30% while preserving good performance. Repeating the loop-shaping procedure for the outer loop yields a complete flight control system. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing under large parametric variations and strong winds.

  13. Investigation of Multi-Input Multi-Output Robust Control Methods to Handle Parametric Uncertainties in Autopilot Design

    PubMed Central

    Kasnakoğlu, Coşku

    2016-01-01

    Some level of uncertainty is unavoidable in acquiring the mass, geometry parameters and stability derivatives of an aerial vehicle. In certain instances, small perturbations of these can cause considerable variations in flight characteristics. This research considers the impact of varying these parameters together, generalizing existing studies that examine the effects of particular parameters on selected modes. Conventional autopilot designs commonly assume that each flight channel is independent and develop single-input single-output (SISO) controllers for each channel, which are then used in parallel in actual flight. It is demonstrated that an attitude controller built this way can perform well on the separate nominal cases but can become unstable under a perturbation of no more than 2%. Two robust multi-input multi-output (MIMO) design strategies, namely loop-shaping and μ-synthesis, are outlined as alternatives and are observed to handle large parametric changes of 30% while preserving good performance. Repeating the loop-shaping procedure for the outer loop yields a complete flight control system. It is confirmed through software-in-the-loop (SIL) verifications utilizing blade element theory (BET) that the autopilot is capable of navigation and landing under large parametric variations and strong winds. PMID:27783706

  14. Petroleum Release Assessment and Impacts of Weather Extremes

    EPA Science Inventory

    Contaminated ground water and vapor intrusion are two major exposure pathways of concern at petroleum release sites. EPA has recently developed a model for petroleum vapor intrusion, called PVIScreen, which incorporates variability and uncertainty in input parameters. This ap...

  15. A First Look at Electric Motor Noise For Future Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.; Henderson, Brenda S.; Envia, Edmane

    2016-01-01

    Motor tone predictions, based on a vibration analysis and design-parameter inputs for high power density motors, show that the noise can be significantly higher or lower than the empirical correlations and can exceed the stated uncertainty.

  16. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two fields of view of a pixel, which may produce loose bounds. Moreover, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We sum up and simplify all the input errors into five parameters by a rotation transformation. Then we use the fast midpoint method to derive the mathematical relationships between the target point and these parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we trace the propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our method.
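
    As a hedged, much-reduced illustration of how pixel uncertainty propagates to 3D location error (using the classic rectified-stereo depth relation Z = f*B/d rather than the paper's midpoint-method derivation), the following snippet compares first-order propagation with a Monte Carlo check:

    import numpy as np

    f = 1200.0        # focal length [px]
    B = 0.12          # baseline [m]
    d = 24.0          # disparity [px]
    sigma_d = 0.3     # matching (pixel) uncertainty [px]

    # First-order (delta-method) propagation: sigma_Z = |dZ/dd| * sigma_d
    Z = f * B / d
    sigma_Z_analytic = f * B / d**2 * sigma_d

    # Monte Carlo check of the same propagation
    rng = np.random.default_rng(1)
    d_samples = d + sigma_d * rng.standard_normal(200_000)
    Z_samples = f * B / d_samples
    print(f"Z = {Z:.3f} m, analytic sigma_Z = {sigma_Z_analytic:.4f} m, "
          f"MC sigma_Z = {Z_samples.std():.4f} m")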

  17. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the ‘ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
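
    A minimal sketch of the emulator workflow, assuming scikit-learn and a toy stand-in "simulator" (not a numerical weather prediction model): fit a Gaussian process to a small set of training runs, then push a large Monte Carlo sample of the uncertain inputs through the cheap emulator instead of the simulator:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def simulator(x):
        # Expensive model stand-in: output as a function of two uncertain inputs.
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    rng = np.random.default_rng(7)
    X_train = rng.uniform(0, 1, size=(30, 2))      # "ensemble" design covering the input space
    y_train = simulator(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    # Monte Carlo through the emulator: sample the input distribution, not the simulator.
    X_mc = rng.normal(loc=[0.5, 0.5], scale=[0.1, 0.15], size=(50_000, 2))
    y_mc, y_std = gp.predict(X_mc, return_std=True)
    print("forecast mean:", y_mc.mean().round(3))
    print("forecast spread from input uncertainty:", y_mc.std().round(3))
    print("mean emulator (code) uncertainty:", y_std.mean().round(3))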

  18. Stabilization and synchronization for a mechanical system via adaptive sliding mode control.

    PubMed

    Song, Zhankui; Sun, Kaibiao; Ling, Shuai

    2017-05-01

    In this paper, we investigate the synchronization problem of a chaotic centrifugal flywheel governor with parameter uncertainty and lumped disturbances. A slave centrifugal flywheel governor system is considered as an underactuated following system for which a control input is designed to follow a master centrifugal flywheel governor system. To tackle lumped disturbances and uncertain parameters, a novel synchronization control law is developed by employing a sliding mode control strategy and the Nussbaum gain technique. Adaptation updating algorithms are derived in the sense of Lyapunov stability analysis such that the lumped disturbances can be suppressed and the adverse effect caused by the uncertain parameters can be compensated. In addition, the synchronization tracking errors are proven to converge to a small neighborhood of the origin. Finally, simulation results demonstrate the effectiveness of the proposed control scheme. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods

    PubMed Central

    Yang, Junjun; He, Zhibin; Du, Jun; Chen, Longfei; Zhu, Xi

    2016-01-01

    In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting of soil water relations in arid regions. We used vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE. PMID:26963523

  20. A MULTISCALE FRAMEWORK FOR THE STOCHASTIC ASSIMILATION AND MODELING OF UNCERTAINTY ASSOCIATED WITH NCF COMPOSITE MATERIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrez, Loujaine; Ghanem, Roger; McAuliffe, Colin

    A multiscale framework to construct stochastic macroscopic constitutive material models is proposed. A spectral projection approach, specifically polynomial chaos expansion, has been used to construct explicit functional relationships between the homogenized properties and input parameters from finer scales. A homogenization engine embedded in Multiscale Designer, software for composite materials, has been used for the upscaling process. The framework is demonstrated using non-crimp fabric composite materials by constructing probabilistic models of the homogenized properties of a non-crimp fabric laminate in terms of the input parameters together with the homogenized properties from finer scales.

  1. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    NASA Astrophysics Data System (ADS)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, a situation known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. We then analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River initially exhibits equifinality. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed. This decrease is equivalent to reducing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
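
    The following Python sketch shows the bare bones of a GLUE analysis on a toy one-parameter recession model (not the Chillan River model): Monte Carlo sampling, an informal likelihood (Nash-Sutcliffe efficiency), a behavioural threshold, and likelihood-weighted uncertainty bounds:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(100)

    def model(k):
        return 10.0 * np.exp(-k * t)                        # toy recession curve

    q_obs = model(0.05) + rng.normal(0, 0.3, size=t.size)   # synthetic "observations"

    k_samples = rng.uniform(0.01, 0.2, size=5000)
    sims = np.array([model(k) for k in k_samples])
    nse = 1 - np.sum((sims - q_obs) ** 2, axis=1) / np.sum((q_obs - q_obs.mean()) ** 2)

    behavioural = nse > 0.7                                  # informal acceptance threshold
    weights = nse[behavioural] - 0.7
    weights /= weights.sum()
    sims_b = sims[behavioural]

    # Likelihood-weighted 5-95% bounds at each time step
    order = np.argsort(sims_b, axis=0)
    lower, upper = np.empty(t.size), np.empty(t.size)
    for j in range(t.size):
        s = sims_b[order[:, j], j]
        cw = np.cumsum(weights[order[:, j]])
        lower[j] = s[np.searchsorted(cw, 0.05)]
        upper[j] = s[np.searchsorted(cw, 0.95)]

    print(f"{behavioural.sum()} behavioural sets, mean bound width = {(upper - lower).mean():.3f}")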

  2. Case studies in Bayesian microbial risk assessments.

    PubMed

    Kennedy, Marc C; Clough, Helen E; Turner, Joanne

    2009-12-21

    The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.

  3. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
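
    As a loose, didactic analogy for the idea behind input-uncertainty weighting (not the authors' exact IUWLS formulation), the following snippet shows how downweighting observations whose input data are more uncertain reduces the bias that ordinary least squares suffers when the input (here, reported pumping) contains error:

    import numpy as np

    rng = np.random.default_rng(11)
    n, true_param = 400, 2.5

    pumping_true = rng.uniform(1, 10, size=n)
    input_sd = np.where(rng.random(n) < 0.5, 0.2, 4.0)            # half the records are very noisy
    pumping_reported = pumping_true + rng.normal(0, input_sd)      # what the modeller actually has
    head_obs = true_param * pumping_true + rng.normal(0, 0.2, size=n)

    x, y = pumping_reported, head_obs

    # Ordinary least squares (through the origin): attenuated by the input error.
    beta_ols = (x * y).sum() / (x * x).sum()

    # Inverse-variance style weights built from the assumed input uncertainty of each record.
    w = 1.0 / (0.2 ** 2 + input_sd ** 2)
    beta_weighted = (w * x * y).sum() / (w * x * x).sum()

    print(f"true {true_param:.2f} | OLS {beta_ols:.2f} (biased low) | input-uncertainty weighted {beta_weighted:.2f}")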

  4. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically-based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapo-transpiration, and streamflow in a PRMS model that we applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to use a variety of parameter optimization and uncertainty methods or easily define their own, such as Monte Carlo random sampling, uniform sampling, or even optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
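
    A minimal sketch of what such a calibration wrapper can look like, using scipy's Nelder-Mead (the downhill simplex method mentioned above) around a toy linear-reservoir model; the model, data, and bounds handling are stand-ins, not PRMS or the Dry Creek setup:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    precip = rng.gamma(0.7, 4.0, size=365)                     # synthetic forcing [mm/d]

    def linear_reservoir(params, precip):
        """Toy rainfall-runoff model: recession coefficient k, runoff fraction c."""
        k, c = params
        s, q = 0.0, np.zeros_like(precip)
        for i, p in enumerate(precip):
            s += c * p
            q[i] = k * s
            s -= q[i]
        return q

    q_obs = linear_reservoir([0.15, 0.6], precip) + rng.normal(0, 0.05, size=365)

    def objective(params):
        if np.any(np.asarray(params) <= 0) or params[0] > 1 or params[1] > 1:
            return 1e6                                         # crude bound handling
        q_sim = linear_reservoir(params, precip)
        return np.sqrt(np.mean((q_sim - q_obs) ** 2))          # RMSE

    result = minimize(objective, x0=[0.5, 0.5], method="Nelder-Mead")
    print("calibrated (k, c):", np.round(result.x, 3), " RMSE:", round(result.fun, 4))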

  5. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  6. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE PAGES

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.; ...

    2017-01-30

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  7. Evaluation of five dry particle deposition parameterizations for incorporation into atmospheric transport models

    NASA Astrophysics Data System (ADS)

    Khan, Tanvir R.; Perlinger, Judith A.

    2017-10-01

    Despite considerable effort to develop mechanistic dry particle deposition parameterizations for atmospheric transport models, current knowledge has been inadequate to propose quantitative measures of the relative performance of available parameterizations. In this study, we evaluated the performance of five dry particle deposition parameterizations developed by Zhang et al. (2001) (Z01), Petroff and Zhang (2010) (PZ10), Kouznetsov and Sofiev (2012) (KS12), Zhang and He (2014) (ZH14), and Zhang and Shao (2014) (ZS14), respectively. The evaluation was performed in three dimensions: model ability to reproduce observed deposition velocities, Vd (accuracy); the influence of imprecision in input parameter values on the modeled Vd (uncertainty); and identification of the most influential parameter(s) (sensitivity). The accuracy of the modeled Vd was evaluated using observations obtained from five land use categories (LUCs): grass, coniferous and deciduous forests, natural water, and ice/snow. To ascertain the uncertainty in modeled Vd, and quantify the influence of imprecision in key model input parameters, a Monte Carlo uncertainty analysis was performed. The Sobol' sensitivity analysis was conducted with the objective to determine the parameter ranking from the most to the least influential. Comparing the normalized mean bias factors (indicators of accuracy), we find that the ZH14 parameterization is the most accurate for all LUCs except for coniferous forest, for which it is second most accurate. From Monte Carlo simulations, the estimated mean normalized uncertainties in the modeled Vd obtained for seven particle sizes (ranging from 0.005 to 2.5 µm) for the five LUCs are 17, 12, 13, 16, and 27 % for the Z01, PZ10, KS12, ZH14, and ZS14 parameterizations, respectively. From the Sobol' sensitivity results, we suggest that the parameter rankings vary by particle size and LUC for a given parameterization. Overall, for dp = 0.001 to 1.0 µm, friction velocity was one of the three most influential parameters in all parameterizations. For giant particles (dp = 10 µm), relative humidity was the most influential parameter. Because it is the least complex of the five parameterizations, and it has the greatest accuracy and least uncertainty, we propose that the ZH14 parameterization is currently superior for incorporation into atmospheric transport models.
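
    For readers unfamiliar with the Sobol' machinery referred to above, the following self-contained snippet estimates first-order and total-effect indices for a toy three-parameter function using the standard pick-and-freeze Monte Carlo estimators; it is not the deposition-velocity analysis itself, for which a dedicated package such as SALib would more likely be used:

    import numpy as np

    rng = np.random.default_rng(2024)
    n, k = 50_000, 3

    def model(x):
        # Toy Vd-like response: strongly driven by x0 (think "friction velocity"),
        # weakly by x1, and by an x0*x2 interaction.
        return 3.0 * x[:, 0] + 0.3 * x[:, 1] + 2.0 * x[:, 0] * x[:, 2]

    A = rng.uniform(0, 1, size=(n, k))
    B = rng.uniform(0, 1, size=(n, k))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # A with only column i taken from B
        yABi = model(ABi)
        S_i = np.mean(yB * (yABi - yA)) / var_y          # first-order index (Saltelli 2010)
        ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total-effect index (Jansen 1999)
        print(f"x{i}: S = {S_i:5.2f}   ST = {ST_i:5.2f}")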

  8. Bayesian inference of uncertainties in precipitation-streamflow modeling in a snow affected catchment

    NASA Astrophysics Data System (ADS)

    Koskela, J. J.; Croke, B. W. F.; Koivusalo, H.; Jakeman, A. J.; Kokkonen, T.

    2012-11-01

    Bayesian inference is used to study the effect of precipitation and model structural uncertainty on estimates of model parameters and confidence limits of predictive variables in a conceptual rainfall-runoff model in the snow-fed Rudbäck catchment (142 ha) in southern Finland. The IHACRES model is coupled with a simple degree day model to account for snow accumulation and melt. The posterior probability distribution of the model parameters is sampled by using the Differential Evolution Adaptive Metropolis (DREAM(ZS)) algorithm and the generalized likelihood function. Precipitation uncertainty is taken into account by introducing additional latent variables that were used as multipliers for individual storm events. Results suggest that occasional snow water equivalent (SWE) observations together with daily streamflow observations do not contain enough information to simultaneously identify model parameters, precipitation uncertainty and model structural uncertainty in the Rudbäck catchment. The addition of an autoregressive component to account for model structure error and latent variables having uniform priors to account for input uncertainty lead to dubious posterior distributions of model parameters. Thus our hypothesis that informative priors for latent variables could be replaced by additional SWE data could not be confirmed. The model was found to work adequately in 1-day-ahead simulation mode, but the results were poor in the simulation batch mode. This was caused by the interaction of parameters that were used to describe different sources of uncertainty. The findings may have lessons for other cases where parameterizations are similarly high in relation to available prior information.
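
    The following compact sketch illustrates, in a deliberately simplified form (a plain random-walk Metropolis sampler with a Gaussian likelihood rather than DREAM(ZS) with a generalized likelihood, and a one-parameter toy runoff model), how per-storm precipitation multipliers can be treated as latent variables alongside a model parameter:

    import numpy as np

    rng = np.random.default_rng(8)
    n_storms = 5
    storm_p_true = np.array([20.0, 35.0, 10.0, 50.0, 25.0])        # true storm depths [mm]
    mult_true = rng.lognormal(0.0, 0.2, n_storms)                   # true (unknown) input errors
    storm_p_obs = storm_p_true / mult_true                          # what the gauge reports
    c_true = 0.4
    q_obs = c_true * storm_p_true + rng.normal(0, 0.5, n_storms)    # observed storm runoff [mm]

    def log_post(theta):
        c, log_mult = theta[0], theta[1:]
        if not (0.0 < c < 1.0):
            return -np.inf
        q_sim = c * storm_p_obs * np.exp(log_mult)                  # multipliers correct the input
        ll = -0.5 * np.sum((q_sim - q_obs) ** 2) / 0.5 ** 2         # Gaussian likelihood
        prior = -0.5 * np.sum(log_mult ** 2) / 0.2 ** 2             # lognormal prior on multipliers
        return ll + prior

    theta = np.concatenate([[0.5], np.zeros(n_storms)])
    lp = log_post(theta)
    samples = []
    for _ in range(20_000):
        prop = theta + rng.normal(0, 0.02, theta.size)              # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())

    samples = np.array(samples[5000:])                              # discard burn-in
    print("posterior mean runoff coefficient:", samples[:, 0].mean().round(3))
    print("posterior mean multipliers:", np.exp(samples[:, 1:]).mean(axis=0).round(2))

    Note that, as the abstract above points out, the runoff coefficient and the multipliers trade off against one another; only the informative prior on the multipliers keeps the toy problem identifiable.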

  9. A consistent framework to predict mass fluxes and depletion times for DNAPL contaminations in heterogeneous aquifers under uncertainty

    NASA Astrophysics Data System (ADS)

    Koch, Jonas; Nowak, Wolfgang

    2013-04-01

    At many hazardous waste sites and accidental spills, dense non-aqueous phase liquids (DNAPLs) such as TCE, PCE, or TCA have been released into the subsurface. Once a DNAPL is released into the subsurface, it serves as a persistent source of dissolved-phase contamination. In chronological order, the DNAPL migrates through the porous medium and penetrates the aquifer, it forms a complex pattern of immobile DNAPL saturation, it dissolves into the groundwater and forms a contaminant plume, and it slowly depletes and bio-degrades in the long term. In industrial countries the number of such contaminated sites is tremendously high, to the point that a ranking from most risky to least risky is advisable. Such a ranking helps to decide whether a site needs to be remediated or may be left to natural attenuation. Both the ranking and the designing of proper remediation or monitoring strategies require a good understanding of the relevant physical processes and their inherent uncertainty. To this end, we conceptualize a probabilistic simulation framework that estimates probability density functions of mass discharge, source depletion time, and critical concentration values at crucial target locations. Furthermore, it supports the inference of contaminant source architectures from arbitrary site data. As an essential novelty, the mutual dependencies of the key parameters and interacting physical processes are taken into account throughout the whole simulation. In an uncertain and heterogeneous subsurface setting, we identify three key parameter fields: the local velocities, the hydraulic permeabilities and the DNAPL phase saturations. Obviously, these parameters depend on each other during DNAPL infiltration, dissolution and depletion. In order to highlight the importance of these mutual dependencies and interactions, we present results of several model setups where we vary the physical and stochastic dependencies of the input parameters and simulated processes. Under these changes, the probability density functions demonstrate strong statistical shifts in their expected values and in their uncertainty. Considering the uncertainties of all key parameters but neglecting their interactions overestimates the output uncertainty. However, consistently using all available physical knowledge when assigning input parameters and simulating all relevant interactions of the involved processes reduces the output uncertainty significantly, back down to useful and plausible ranges. When using our framework in an inverse setting, omitting a parameter dependency within a crucial physical process would lead to physically meaningless identified parameters. Thus, we conclude that the additional complexity we propose is both necessary and adequate. Overall, our framework provides a tool for reliable and plausible prediction, risk assessment, and model-based decision support for DNAPL contaminated sites.

  10. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  11. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
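
    A minimal sketch of the underlying idea, with hypothetical field names (this is not the authors' tool and is not wired to LAURA, HARA, or FIAT): a regular expression finds fields written as "value +/- tolerance" in an ordinary input file and replaces each with a random draw for every Monte Carlo realization, while fields without tolerances are left untouched:

    import re
    import random

    input_text = """
    wall_temperature = 5.25 +/- 0.01
    emissivity       = 0.85 +/- 0.02
    chord_length     = 1.20
    """

    TOL = re.compile(r"(?P<value>[-+]?\d*\.?\d+)\s*\+/-\s*(?P<tol>\d*\.?\d+)")

    def realize(text, rng):
        """Replace every 'value +/- tol' with one random draw (uniform within tolerance)."""
        def draw(m):
            v, t = float(m.group("value")), float(m.group("tol"))
            return f"{rng.uniform(v - t, v + t):.6g}"
        return TOL.sub(draw, text)

    rng = random.Random(0)
    for i in range(3):                 # three Monte Carlo realizations of the input file
        print(f"--- realization {i} ---")
        print(realize(input_text, rng).strip())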

  12. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergman, Rolf; Paget, Maria L.; Richman, Eric E.

    2011-03-31

    With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, covariances associated with input estimates and the calculation of the result measurements. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
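
    The following snippet sketches the generic GUM-style combination that such a spreadsheet implements, using a simplified stand-in measurement equation (not the actual CALiPER model): sensitivity coefficients from finite differences, a root-sum-of-squares combination for uncorrelated inputs, and an expanded uncertainty with coverage factor k = 2:

    import numpy as np

    def lumen_model(x):
        detector_signal, calib_factor, self_absorption = x
        return detector_signal * calib_factor / self_absorption   # simplified "measurement equation"

    x_best = np.array([1520.0, 0.662, 0.985])        # best estimates of the inputs
    u_x = np.array([4.0, 0.004, 0.003])              # standard uncertainties of the inputs

    # Sensitivity coefficients c_i = dY/dx_i by central finite differences
    c = np.empty_like(x_best)
    for i in range(x_best.size):
        h = 1e-6 * max(abs(x_best[i]), 1.0)
        xp, xm = x_best.copy(), x_best.copy()
        xp[i] += h
        xm[i] -= h
        c[i] = (lumen_model(xp) - lumen_model(xm)) / (2 * h)

    y = lumen_model(x_best)
    u_c = np.sqrt(np.sum((c * u_x) ** 2))            # combined standard uncertainty (uncorrelated inputs)
    U = 2.0 * u_c                                    # expanded uncertainty, k = 2 (~95 %)
    print(f"result = {y:.1f} lm,  u_c = {u_c:.1f} lm,  U (k=2) = {U:.1f} lm")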

  14. Power oscillation suppression by robust SMES in power system with large wind power penetration

    NASA Astrophysics Data System (ADS)

    Ngamroo, Issarachai; Cuk Supriyadi, A. N.; Dechanupaprittha, Sanchai; Mitani, Yasunori

    2009-01-01

    The large penetration of wind farm into interconnected power systems may cause the severe problem of tie-line power oscillations. To suppress power oscillations, the superconducting magnetic energy storage (SMES) which is able to control active and reactive powers simultaneously, can be applied. On the other hand, several generating and loading conditions, variation of system parameters, etc., cause uncertainties in the system. The SMES controller designed without considering system uncertainties may fail to suppress power oscillations. To enhance the robustness of SMES controller against system uncertainties, this paper proposes a robust control design of SMES by taking system uncertainties into account. The inverse additive perturbation is applied to represent the unstructured system uncertainties and included in power system modeling. The configuration of active and reactive power controllers is the first-order lead-lag compensator with single input feedback. To tune the controller parameters, the optimization problem is formulated based on the enhancement of robust stability margin. The particle swarm optimization is used to solve the problem and achieve the controller parameters. Simulation studies in the six-area interconnected power system with wind farms confirm the robustness of the proposed SMES under various operating conditions.

  15. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  16. Inferring pathological states in cortical neuron microcircuits.

    PubMed

    Rydzewski, Jakub; Nowak, Wieslaw; Nicosia, Giuseppe

    2015-12-07

    The brain activity is to a large extent determined by states of neural cortex microcircuits. Unfortunately, accuracy of results from neural circuits' mathematical models is often biased by the presence of uncertainties in underlying experimental data. Moreover, due to problems with uncertainties identification in a multidimensional parameters space, it is almost impossible to classify states of the neural cortex, which correspond to a particular set of the parameters. Here, we develop a complete methodology for determining uncertainties and the novel protocol for classifying all states in any neuroinformatic model. Further, we test this protocol on the mathematical, nonlinear model of such a microcircuit developed by Giugliano et al. (2008) and applied in the experimental data analysis of Huntington's disease. Up to now, the link between parameter domains in the mathematical model of Huntington's disease and the pathological states in cortical microcircuits has remained unclear. In this paper we precisely identify all the uncertainties, the most crucial input parameters and domains that drive the system into an unhealthy state. The scheme proposed here is general and can be easily applied to other mathematical models of biological phenomena. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Fuzzy Stochastic Petri Nets for Modeling Biological Systems with Uncertain Kinetic Parameters

    PubMed Central

    Liu, Fei; Heiner, Monika; Yang, Ming

    2016-01-01

    Stochastic Petri nets (SPNs) have been widely used to model randomness which is an inherent feature of biological systems. However, for many biological systems, some kinetic parameters may be uncertain due to incomplete, vague or missing kinetic data (often called fuzzy uncertainty), or naturally vary, e.g., between different individuals, experimental conditions, etc. (often called variability), which has prevented a wider application of SPNs that require accurate parameters. Considering the strength of fuzzy sets to deal with uncertain information, we apply a specific type of stochastic Petri nets, fuzzy stochastic Petri nets (FSPNs), to model and analyze biological systems with uncertain kinetic parameters. FSPNs combine SPNs and fuzzy sets, thereby taking into account both randomness and fuzziness of biological systems. For a biological system, SPNs model the randomness, while fuzzy sets model kinetic parameters with fuzzy uncertainty or variability by associating each parameter with a fuzzy number instead of a crisp real value. We introduce a simulation-based analysis method for FSPNs to explore the uncertainties of outputs resulting from the uncertainties associated with input parameters, which works equally well for bounded and unbounded models. We illustrate our approach using a yeast polarization model having an infinite state space, which shows the appropriateness of FSPNs in combination with simulation-based analysis for modeling and analyzing biological systems with uncertain information. PMID:26910830

  18. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and the uncertainties associated with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bio-remediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters used as input for follow-on remedial efficiency predictions. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bio-remediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets. NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which would otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to the bioremediation of groundwater at a real site shows that it is effective in supporting management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical means of utilizing model predictive uncertainty methods in environmental management.

  19. Streamflow hindcasting in European river basins via multi-parametric ensemble of the mesoscale hydrologic model (mHM)

    NASA Astrophysics Data System (ADS)

    Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    There have been tremendous improvements in distributed hydrologic modeling (DHM) which have made process-based simulation with a high spatiotemporal resolution applicable on a large spatial scale. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherently coming from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM by using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is, however, often ignored, mainly due to practical limitations of the methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, predictive uncertainty by DHM may be insufficient to capture the dynamics of observations, which may deteriorate predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to consider the response times and non-Gaussian characteristics of internal hydrologic processes. The hindcasting experiments are implemented to evaluate impacts of the proposed DA method on streamflow predictions in multiple European river basins having different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach could be stable with limited ensemble members and viable for practical use.

  20. Large Uncertainty in Estimating pCO2 From Carbonate Equilibria in Lakes

    NASA Astrophysics Data System (ADS)

    Golub, Malgorzata; Desai, Ankur R.; McKinley, Galen A.; Remucal, Christina K.; Stanley, Emily H.

    2017-11-01

    Most estimates of carbon dioxide (CO2) evasion from freshwaters rely on calculating partial pressure of aquatic CO2 (pCO2) from two out of three CO2-related parameters using carbonate equilibria. However, the pCO2 uncertainty has not been systematically evaluated across multiple lake types and equilibria. We quantified random errors in pH, dissolved inorganic carbon, alkalinity, and temperature from the North Temperate Lakes Long-Term Ecological Research site in four lake groups across a broad gradient of chemical composition. These errors were propagated onto pCO2 calculated from three carbonate equilibria, and for overlapping observations, compared against uncertainties in directly measured pCO2. The empirical random errors in CO2-related parameters were mostly below 2% of their median values. Resulting random pCO2 errors ranged from ±3.7% to ±31.5% of the median depending on alkalinity group and choice of input parameter pairs. Temperature uncertainty had a negligible effect on pCO2. When compared with direct pCO2 measurements, all parameter combinations produced biased pCO2 estimates with less than one third of total uncertainty explained by random pCO2 errors, indicating that systematic uncertainty dominates over random error. Multidecadal trend of pCO2 was difficult to reconstruct from uncertain historical observations of CO2-related parameters. Given poor precision and accuracy of pCO2 estimates derived from virtually any combination of two CO2-related parameters, we recommend direct pCO2 measurements where possible. To achieve consistently robust estimates of CO2 emissions from freshwater components of terrestrial carbon balances, future efforts should focus on improving accuracy and precision of CO2-related parameters (including direct pCO2) measurements and associated pCO2 calculations.
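
    The following Monte Carlo sketch shows how random pH and alkalinity errors propagate to pCO2 under a deliberately simplified freshwater carbonate system (alkalinity taken as bicarbonate only, with approximate 25 degree C constants); the numbers are illustrative and this is not the paper's full equilibrium calculation:

    import numpy as np

    rng = np.random.default_rng(17)
    n = 100_000

    K1 = 10 ** -6.35          # first dissociation constant of CO2(aq), approx. 25 degC
    KH = 10 ** -1.47          # Henry's law constant [mol L-1 atm-1], approx. 25 degC

    pH = rng.normal(7.50, 0.02, n)            # measured pH with random error
    alk = rng.normal(1.5e-3, 1.5e-5, n)       # alkalinity [eq L-1] with ~1 % random error

    h = 10.0 ** (-pH)
    co2_aq = alk * h / K1                     # [CO2*] assuming alkalinity ~ [HCO3-]
    pco2 = co2_aq / KH * 1e6                  # partial pressure in microatm

    print(f"pCO2 = {np.median(pco2):.0f} microatm, "
          f"random error = +/-{100 * pco2.std() / pco2.mean():.1f} % of the mean")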

  1. Entry dynamics of space shuttle orbiter with lateral-directional stability and control uncertainties at supersonic and hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Stone, H. W.; Powell, R. W.

    1977-01-01

    A six-degree-of-freedom simulation analysis was conducted to examine the effects of the lateral-directional static aerodynamic stability and control uncertainties on the performance of the automatic (no manual inputs) entry-guidance and control systems of the space shuttle orbiter. To establish the acceptable boundaries of the uncertainties, the static aerodynamic characteristics were varied either by applying a multiplier to the aerodynamic parameter or by adding an increment. Control-system modifications were identified that decrease the sensitivity to off-nominal aerodynamics. With these modifications, the acceptable aerodynamic boundaries were determined.

  2. Uncertainty Analysis of Runoff Simulations and Parameter Identifiability in the Community Land Model – Evidence from MOPEX Basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.

    2013-12-01

    With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill posed.

  3. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using Fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or may be infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial Finite Element software. Modal analysis of a simplified aircraft wing with Fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
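
    To make the low-order-correlation idea concrete, the following sketch evaluates a first-order cut-HDMR approximation of a toy response; the test function, cut point, and dimensions are hypothetical, and a real application would tabulate the component functions once and couple them to the finite element solver and the alpha-cut loop rather than re-evaluating the model per test point.

      import numpy as np

      def model(x):
          # Toy high-dimensional response; stands in for an expensive FE model.
          return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[2] * x[3] + x[4]

      dim = 5
      x_ref = np.full(dim, 0.5)        # reference ("cut") point
      f0 = model(x_ref)                # zeroth-order HDMR term

      def first_order_hdmr(x):
          # f(x) ~ f0 + sum_i [ f(x_i, reference elsewhere) - f0 ]
          approx = f0
          for i in range(dim):
              xi = x_ref.copy()
              xi[i] = x[i]
              approx += model(xi) - f0
          return approx

      rng = np.random.default_rng(1)
      x_test = rng.uniform(0, 1, size=(1000, dim))
      exact = np.array([model(x) for x in x_test])
      approx = np.array([first_order_hdmr(x) for x in x_test])
      err = np.abs(exact - approx)
      # Only the weak x3*x4 interaction is missed; the number of component
      # functions grows linearly with dimension instead of exponentially.
      print(f"mean |error| = {err.mean():.3e}, max |error| = {err.max():.3e}")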

  4. Effect of Temporal and Spatial Rainfall Resolution on HSPF Predictive Performance and Parameter Estimation

    EPA Science Inventory

    Watershed scale rainfall‐runoff models are used for environmental management and regulatory modeling applications, but their effectiveness is limited by predictive uncertainties associated with model input data. This study evaluated the effect of temporal and spatial rainfall re...

  5. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides an estimate of the spread or uncertainty in CME arrival time predictions that arises from uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real-time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real-time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits; using the actual arrival times, an average absolute error of 8.20 hours was found across these ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and the CME.
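
    The ensemble workflow can be sketched with a toy constant-speed kinematic model standing in for the WSA-ENLIL+Cone runs; the speed distribution and the "observed" transit time below are hypothetical, and in the real workflow StereoCAT supplies the triangulated input-parameter distributions.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 48                                  # ensemble size, as in routine runs
      AU_KM = 1.496e8

      # Hypothetical spread of triangulated CME speeds (km/s)
      speed = rng.normal(900.0, 100.0, n)

      # Toy constant-speed kinematics in place of the MHD propagation model
      arrival_hr = AU_KM / speed / 3600.0

      observed_hr = 49.0                      # hypothetical observed transit time
      err = arrival_hr.mean() - observed_hr
      print(f"predicted transit {arrival_hr.mean():.1f} h "
            f"(range {arrival_hr.min():.1f}-{arrival_hr.max():.1f} h), "
            f"error {err:+.1f} h")
      print("observed within ensemble range:",
            arrival_hr.min() <= observed_hr <= arrival_hr.max())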

  6. Uncertainties of parameterized surface downward clear-sky shortwave and all-sky longwave radiation.

    NASA Astrophysics Data System (ADS)

    Gubler, S.; Gruber, S.; Purves, R. S.

    2012-06-01

    As many environmental models rely on simulating the energy balance at the Earth's surface based on parameterized radiative fluxes, knowledge of the inherent model uncertainties is important. In this study we evaluate one parameterization of clear-sky direct, diffuse and global shortwave downward radiation (SDR) and diverse parameterizations of clear-sky and all-sky longwave downward radiation (LDR). In a first step, SDR is estimated based on measured input variables and estimated atmospheric parameters for hourly time steps during the years 1996 to 2008. Model behaviour is validated using the high quality measurements of six Alpine Surface Radiation Budget (ASRB) stations in Switzerland covering different elevations, and measurements of the Swiss Alpine Climate Radiation Monitoring network (SACRaM) in Payerne. In a next step, twelve clear-sky LDR parameterizations are calibrated using the ASRB measurements. One of the best performing parameterizations is selected to estimate all-sky LDR, where cloud transmissivity is estimated using measured and modeled global SDR during daytime. In a last step, the performance of several interpolation methods is evaluated to determine the cloud transmissivity at night. We show that clear-sky direct, diffuse and global SDR is adequately represented by the model when using measurements of the atmospheric parameters precipitable water and aerosol content at Payerne. If the atmospheric parameters are estimated and used as fixed values, the relative mean bias deviance (MBD) and the relative root mean squared deviance (RMSD) of the clear-sky global SDR scatter between -2 and 5%, and 7 and 13%, across the six locations. The small errors in clear-sky global SDR can be attributed to compensating effects of modeled direct and diffuse SDR, since an overestimation of aerosol content in the atmosphere results in underestimating the direct, but overestimating the diffuse SDR. Calibration of LDR parameterizations to local conditions strongly reduces MBD and RMSD compared to using the published values of the parameters, resulting in relative MBD and RMSD of less than 5% and 10%, respectively, for the best parameterizations. The best results for estimating cloud transmissivity during nighttime were obtained by linearly interpolating the average of the cloud transmissivity of the four hours of the preceding afternoon and the following morning. Model uncertainty can be caused by different error sources, such as code implementation, errors in input data, and errors in estimated parameters. The influence of the latter (errors in input data and model parameter uncertainty) on model outputs is determined using Monte Carlo simulation. Model uncertainty is provided as the relative standard deviation σrel of the simulated frequency distributions of the model outputs. An optimistic estimate of the relative uncertainty σrel resulted in 10% for the clear-sky direct, 30% for diffuse, 3% for global SDR, and 3% for the fitted all-sky LDR.
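
    The Monte Carlo estimate of σrel can be illustrated with a toy Beer-Lambert direct-SDR expression and an uncertain broadband optical depth; this is not the parameterization evaluated in the study (which treats aerosol, water vapour and Rayleigh terms separately), and all numbers below are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 50_000

      S0 = 1361.0                  # solar constant, W/m^2
      zenith = np.deg2rad(40.0)    # fixed solar zenith angle for the example
      mu = np.cos(zenith)

      # Uncertain atmospheric input (hypothetical mean and spread)
      tau = np.clip(rng.normal(0.30, 0.05, n), 0.0, None)   # clear-sky optical depth

      # Toy Beer-Lambert clear-sky direct SDR
      sdr_direct = S0 * mu * np.exp(-tau / mu)

      sigma_rel = sdr_direct.std() / sdr_direct.mean()
      print(f"mean direct SDR = {sdr_direct.mean():.0f} W/m^2, "
            f"relative uncertainty sigma_rel = {100 * sigma_rel:.1f}%")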

  7. A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts

    NASA Astrophysics Data System (ADS)

    Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel

    2016-04-01

    Uncertainty poses a significant challenge in environmental research, and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from 5 different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within-model structure, and model parameterisation. Latin Hypercube sampling is applied to develop large parameter ensembles for each model structure, which are run using parallel computing on a high performance computer cluster. Parameterisations are assessed using multi-objective evaluation criteria which include both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such the relative importance of each aspect of uncertainty can be determined.
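
    A minimal sketch of the Latin Hypercube ensemble-generation step is given below; the parameter names, ranges, toy evaluation function and acceptance thresholds are hypothetical stand-ins for the lumped catchment models and the multi-objective metrics used in the study.

      import numpy as np
      from scipy.stats import qmc

      # Hypothetical ranges for four lumped-catchment parameters
      names = ["storage_cap", "recession_k", "pet_factor", "routing_lag"]
      lower = np.array([50.0, 0.01, 0.7, 0.5])
      upper = np.array([500.0, 0.50, 1.3, 5.0])

      sampler = qmc.LatinHypercube(d=len(names), seed=0)
      unit = sampler.random(n=2000)                 # stratified samples on [0,1]^d
      params = qmc.scale(unit, lower, upper)        # rescale to parameter ranges

      # Placeholder for the catchment model plus multi-objective evaluation;
      # a real run would return e.g. NSE and a drought/low-flow metric per set.
      def evaluate(p):
          nse = 1.0 - abs(p[0] / 500.0 - 0.4) - abs(p[1] - 0.2)   # synthetic score
          low_flow_bias = abs(p[2] - 1.0)
          return nse, low_flow_bias

      scores = np.array([evaluate(p) for p in params])
      behavioural = params[(scores[:, 0] > 0.7) & (scores[:, 1] < 0.1)]
      print(f"{len(behavioural)} of {len(params)} parameter sets retained")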

  8. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    NASA Astrophysics Data System (ADS)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
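
    The efficiency argument can be illustrated in one dimension: a low-order polynomial chaos expansion built from a handful of quadrature evaluations reproduces the output statistics that plain Monte Carlo needs many model runs to estimate. The response function below is a toy stand-in, not the ANVEL vehicle model.

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      # Toy response: a nonlinear function of one standard-normal parameter
      # (standing in for, e.g., a vehicle response to an uncertain terrain property)
      def g(x):
          return np.exp(0.3 * x) + 0.1 * x**2

      # --- Polynomial chaos by Gauss-Hermite projection (probabilists' Hermite) ---
      order = 6
      nodes, weights = He.hermegauss(30)          # weight exp(-x^2/2); sums to sqrt(2*pi)
      weights = weights / np.sqrt(2.0 * np.pi)    # normalise to the N(0,1) density
      gv = g(nodes)

      coeffs = []
      for k in range(order + 1):
          He_k = He.hermeval(nodes, [0.0] * k + [1.0])                     # He_k at nodes
          coeffs.append(np.sum(weights * gv * He_k) / math.factorial(k))   # E[He_k^2] = k!

      pce_mean = coeffs[0]
      pce_var = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)

      # --- Plain Monte Carlo for comparison ---
      mc = g(np.random.default_rng(0).standard_normal(200_000))
      print(f"PCE: mean={pce_mean:.4f}  var={pce_var:.4f}  (30 model evaluations)")
      print(f"MC : mean={mc.mean():.4f}  var={mc.var():.4f}  (200000 model evaluations)")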

  9. Modeling uncertainty and correlation in soil properties using Restricted Pairing and implications for ensemble-based hillslope-scale soil moisture and temperature estimation

    NASA Astrophysics Data System (ADS)

    Flores, A. N.; Entekhabi, D.; Bras, R. L.

    2007-12-01

    Soil hydraulic and thermal properties (SHTPs) affect both the rate of moisture redistribution in the soil column and the volumetric soil water capacity. Adequately constraining these properties through field and lab analysis to parameterize spatially-distributed hydrology models is often prohibitively expensive. Because SHTPs vary significantly at small spatial scales, individual soil samples are also only reliably indicative of local conditions, and these properties remain a significant source of uncertainty in soil moisture and temperature estimation. In ensemble-based soil moisture data assimilation, uncertainty in the model-produced prior estimate due to associated uncertainty in SHTPs must be taken into account to avoid under-dispersive ensembles. To treat SHTP uncertainty for purposes of supplying inputs to a distributed watershed model we use the restricted pairing (RP) algorithm, an extension of Latin Hypercube (LH) sampling. The RP algorithm generates an arbitrary number of SHTP combinations by sampling the appropriate marginal distributions of the individual soil properties using the LH approach, while imposing a target rank correlation among the properties. A previously-published meta-database of 1309 soils representing 12 textural classes is used to fit appropriate marginal distributions to the properties and compute the target rank correlation structure, conditioned on soil texture. Given categorical soil textures, our implementation of the RP algorithm generates an arbitrarily-sized ensemble of realizations of the SHTPs required as input to the TIN-based Realtime Integrated Basin Simulator with vegetation dynamics (tRIBS+VEGGIE) distributed parameter ecohydrology model. Soil moisture ensembles simulated with RP-generated SHTPs exhibit less variance than ensembles simulated with SHTPs generated by a scheme that neglects correlation among properties. Neglecting correlation among SHTPs can lead to physically unrealistic combinations of parameters that exhibit implausible hydrologic behavior when input to the tRIBS+VEGGIE model.
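
    The idea behind restricted pairing, imposing a target rank correlation on independently generated LH marginal samples by reordering, can be sketched with an Iman-Conover-style reshuffle. The marginal distributions and target rank correlation below are hypothetical, not those fitted from the 1309-soil database, and the correlation imposed on the normal scores matches the target rank correlation only approximately.

      import numpy as np
      from scipy import stats
      from scipy.stats import qmc

      rng = np.random.default_rng(11)
      n = 1000

      # Hypothetical marginal distributions for three soil properties
      marginals = {
          "log10_Ksat": stats.norm(loc=-5.5, scale=0.8),    # saturated conductivity
          "porosity":   stats.beta(a=6.0, b=9.0),
          "psi_b":      stats.lognorm(s=0.5, scale=0.1),    # air-entry pressure head
      }
      target_rank_corr = np.array([[ 1.0, -0.6,  0.5],
                                   [-0.6,  1.0, -0.4],
                                   [ 0.5, -0.4,  1.0]])

      # 1) Latin hypercube sample of each marginal via the inverse CDF
      u = qmc.LatinHypercube(d=len(marginals), seed=1).random(n)
      X = np.column_stack([dist.ppf(u[:, j])
                           for j, dist in enumerate(marginals.values())])

      # 2) Correlated normal scores with the target correlation (Cholesky factor)
      Z = rng.standard_normal((n, len(marginals))) @ np.linalg.cholesky(target_rank_corr).T

      # 3) Restricted pairing: reorder each LHS column so its ranks match the
      #    ranks of the corresponding normal-score column
      Xrp = np.empty_like(X)
      for j in range(X.shape[1]):
          order = np.argsort(np.argsort(Z[:, j]))          # ranks of the scores
          Xrp[:, j] = np.sort(X[:, j])[order]

      achieved = stats.spearmanr(Xrp)[0]                   # check imposed structure
      print(np.round(achieved, 2))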

  10. Measurements for liquid rocket engine performance code verification

    NASA Technical Reports Server (NTRS)

    Praharaj, Sarat C.; Palko, Richard L.

    1986-01-01

    The goal of the rocket engine performance code verification tests is to obtain the Isp with an accuracy of 0.25% or less. This needs to be done during the sequence of four related tests (two reactive and two hot gas simulation) to best utilize the loss separation technique recommended in this study. In addition to Isp, the measurements of the input and output parameters for the codes are needed. This study has shown two things in regard to obtaining the Isp uncertainty within the 0.25% target. First, this target is generally not being realized at the present time, and second, the instrumentation and testing technology does exist to obtain this 0.25% uncertainty goal. However, to achieve this goal will require carefully planned, designed, and conducted testing. In addition, the test-stand (or system) dynamics must be evaluated in the pre-test and post-test phases of the design of the experiment and data analysis, respectively, always keeping in mind that a 0.25% overall uncertainty in Isp is targeted. A table gives the maximum allowable uncertainty required for obtaining Isp with 0.25% uncertainty, the currently-quoted instrument specification, and present test uncertainty for the parameters. In general, it appears that measurement of the mass flow parameter within the required uncertainty may be the most difficult.
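
    A short worked budget shows why the mass-flow measurement dominates: for Isp = F/(mdot*g0), uncorrelated relative uncertainties add in quadrature. The input uncertainty values below are illustrative assumptions, not the values from the report's table.

      import math

      # Illustrative relative measurement uncertainties, in percent
      u_thrust = 0.15        # load-cell thrust measurement
      u_mdot   = 0.18        # propellant mass-flow measurement

      # Isp = F / (mdot * g0); g0 is a defined constant and contributes nothing,
      # so the relative uncertainties of F and mdot combine in quadrature.
      u_isp = math.sqrt(u_thrust**2 + u_mdot**2)
      print(f"combined Isp uncertainty = {u_isp:.2f}%  (target 0.25%)")

      # Largest mass-flow uncertainty still compatible with the 0.25% target,
      # given the assumed thrust uncertainty
      u_mdot_max = math.sqrt(0.25**2 - u_thrust**2)
      print(f"mass flow must be measured to better than {u_mdot_max:.2f}%")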

  11. Impact of clinical input variable uncertainties on ten-year atherosclerotic cardiovascular disease risk using new pooled cohort equations.

    PubMed

    Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S

    2016-08-31

    The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating atherosclerotic cardiovascular risk of individuals. The impact of the clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in input of age (0-1 year) and ±10% variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at 5% and 7.5% risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24% of the population cohort at both 5% and 7.5% risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5% baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.

  12. Robust control of seismically excited cable stayed bridges with MR dampers

    NASA Astrophysics Data System (ADS)

    YeganehFallah, Arash; Khajeh Ahamd Attari, Nader

    2017-03-01

    In recent decades, active and semi-active structural control have become attractive alternatives for enhancing the performance of civil infrastructure subjected to seismic and wind loads. However, reliable active and semi-active control requires that information about uncertainties be included in the design of the controller. In real-world civil structures, parameters such as loading locations, stiffness, mass and damping are time-variant and uncertain. These uncertainties are in many cases modeled as parametric uncertainties. The motivation of this research is to design a robust controller for attenuating the vibrational responses of civil infrastructure while accounting for its dynamical uncertainties. Uncertainties in structural dynamics parameters are modeled as affine uncertainties in the state-space model. These uncertainties are decoupled from the system through a Linear Fractional Transformation (LFT) and are treated as unknown but norm-bounded inputs to the system. A robust H∞ controller is designed for the decoupled system to regulate the evaluation outputs; it is robust to the effects of uncertainties, disturbances and sensor noise. The cable-stayed bridge benchmark, which is equipped with MR dampers, is considered for the numerical simulation. The simulation results show that the proposed robust controller can effectively mitigate the undesired effects of uncertainties on the system's response under seismic loading.

  13. Traceable measurements of the electrical parameters of solid-state lighting products

    NASA Astrophysics Data System (ADS)

    Zhao, D.; Rietveld, G.; Braun, J.-P.; Overney, F.; Lippert, T.; Christensen, A.

    2016-12-01

    In order to perform traceable measurements of the electrical parameters of solid-state lighting (SSL) products, it is necessary to define technically adequate measurement procedures and to identify the relevant uncertainty sources. The present published written standard for SSL products specifies test conditions, but it lacks an explanation of how adequate these test conditions are. More specifically, both an identification of uncertainty sources and a quantitative uncertainty analysis are absent. This paper fills the related gap in the present written standard. New uncertainty sources with respect to conventional lighting sources are determined and their effects are quantified. It is shown that for power measurements, the main uncertainty sources are temperature deviation, power supply voltage distortion, and instability of the SSL product. For current RMS measurements, the influences of bandwidth, shunt resistor, power supply source impedance and AC frequency flatness are significant as well. The measurement uncertainty depends not only on the test equipment but is also a function of the characteristics of the device under test (DUT), for example, current harmonics spectrum and input impedance. Therefore, an online calculation tool is provided to help non-electrical experts. Following our procedures, unrealistic uncertainty estimations, unnecessary procedures and expensive equipment can be avoided.

  14. Uncertainty Analysis of Simulated Hydraulic Fracturing

    NASA Astrophysics Data System (ADS)

    Chen, M.; Sun, Y.; Fu, P.; Carrigan, C. R.; Lu, Z.

    2012-12-01

    Artificial hydraulic fracturing is being used widely to stimulate production of oil, natural gas, and geothermal reservoirs with low natural permeability. Optimization of field design and operation is limited by the incomplete characterization of the reservoir, as well as the complexity of hydrological and geomechanical processes that control the fracturing. Thus, there are a variety of uncertainties associated with the pre-existing fracture distribution, rock mechanics, and hydraulic-fracture engineering that require evaluation of their impact on the optimized design. In this study, a multiple-stage scheme was employed to evaluate the uncertainty. We first define the ranges and distributions of 11 input parameters that characterize the natural fracture topology, in situ stress, geomechanical behavior of the rock matrix and joint interfaces, and pumping operation, to cover a wide spectrum of potential conditions expected for a natural reservoir. These parameters were then sampled 1,000 times in an 11-dimensional parameter space constrained by the specified ranges using the Latin-hypercube method. These 1,000 parameter sets were fed into the fracture simulators, and the outputs were used to construct three designed objective functions, i.e. fracture density, opened fracture length and area density. Using PSUADE, three response surfaces (11-dimensional) of the objective functions were developed and global sensitivity was analyzed to identify the most sensitive parameters for the objective functions representing fracture connectivity, which are critical for sweep efficiency of the recovery process. The second-stage high resolution response surfaces were constructed with dimension reduced to the number of the most sensitive parameters. An additional response surface with respect to the objective function of the fractal dimension for fracture distributions was constructed in this stage. Based on these response surfaces, comprehensive uncertainty analyses were conducted among input parameters and objective functions. In addition, reduced-order emulation models resulting from this analysis can be used for optimal control of hydraulic fracturing. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  15. Prioritizing Risks and Uncertainties from Intentional Release of Selected Category A Pathogens

    PubMed Central

    Hong, Tao; Gurian, Patrick L.; Huang, Yin; Haas, Charles N.

    2012-01-01

    This paper synthesizes available information on five Category A pathogens (Bacillus anthracis, Yersinia pestis, Francisella tularensis, Variola major and Lassa) to develop quantitative guidelines for how environmental pathogen concentrations may be related to human health risk in an indoor environment. An integrated model of environmental transport and human health exposure to biological pathogens is constructed which 1) includes the effects of environmental attenuation, 2) considers fomite contact exposure as well as inhalational exposure, and 3) includes an uncertainty analysis to identify key input uncertainties, which may inform future research directions. The findings provide a framework for developing the many different environmental standards that are needed for making risk-informed response decisions, such as when prophylactic antibiotics should be distributed, and whether or not a contaminated area should be cleaned up. The approach is based on the assumption of uniform mixing in environmental compartments and is thus applicable to areas sufficiently removed in time and space from the initial release that mixing has produced relatively uniform concentrations. Results indicate that when pathogens are released into the air, risk from inhalation is the main component of the overall risk, while risk from ingestion (dermal contact for B. anthracis) is the main component of the overall risk when pathogens are present on surfaces. Concentrations sampled from untracked floor, walls and the filter of heating ventilation and air conditioning (HVAC) system are proposed as indicators of previous exposure risk, while samples taken from touched surfaces are proposed as indicators of future risk if the building is reoccupied. A Monte Carlo uncertainty analysis is conducted and input-output correlations used to identify important parameter uncertainties. An approach is proposed for integrating these quantitative assessments of parameter uncertainty with broader, qualitative considerations to identify future research priorities. PMID:22412915

  16. UQTk Version 3.0.3 User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sargsyan, Khachik; Safta, Cosmin; Chowdhary, Kamaljit Singh

    2017-05-01

    The UQ Toolkit (UQTk) is a collection of libraries and tools for the quantification of uncertainty in numerical model predictions. Version 3.0.3 offers intrusive and non-intrusive methods for propagating input uncertainties through computational models, tools for sensitivity analysis, methods for sparse surrogate construction, and Bayesian inference tools for inferring parameters from experimental data. This manual discusses the download and installation process for UQTk, provides pointers to the UQ methods used in the toolkit, and describes some of the examples provided with the toolkit.

  17. Robust fixed-time synchronization of delayed Cohen-Grossberg neural networks.

    PubMed

    Wan, Ying; Cao, Jinde; Wen, Guanghui; Yu, Wenwu

    2016-01-01

    The fixed-time master-slave synchronization of Cohen-Grossberg neural networks with parameter uncertainties and time-varying delays is investigated. Compared with finite-time synchronization where the convergence time relies on the initial synchronization errors, the settling time of fixed-time synchronization can be adjusted to desired values regardless of initial conditions. Novel synchronization control strategy for the slave neural network is proposed. By utilizing the Filippov discontinuous theory and Lyapunov stability theory, some sufficient schemes are provided for selecting the control parameters to ensure synchronization with required convergence time and in the presence of parameter uncertainties. Corresponding criteria for tuning control inputs are also derived for the finite-time synchronization. Finally, two numerical examples are given to illustrate the validity of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Uncertainty quantification analysis of the dynamics of an electrostatically actuated microelectromechanical switch model

    NASA Astrophysics Data System (ADS)

    Snow, Michael G.; Bajaj, Anil K.

    2015-08-01

    This work presents an uncertainty quantification (UQ) analysis of a comprehensive model for an electrostatically actuated microelectromechanical system (MEMS) switch. The goal is to elucidate the effects of parameter variations on certain key performance characteristics of the switch. A sufficiently detailed model of the electrostatically actuated switch in the basic configuration of a clamped-clamped beam is developed. This multi-physics model accounts for various physical effects, including the electrostatic fringing field, finite length of electrodes, squeeze film damping, and contact between the beam and the dielectric layer. The performance characteristics of immediate interest are the static and dynamic pull-in voltages for the switch. Numerical approaches for evaluating these characteristics are developed and described. Using Latin Hypercube Sampling and other sampling methods, the model is evaluated to find these performance characteristics when variability in the model's geometric and physical parameters is specified. Response surfaces of these results are constructed via a Multivariate Adaptive Regression Splines (MARS) technique. Using a Direct Simulation Monte Carlo (DSMC) technique on these response surfaces gives smooth probability density functions (PDFs) of the outputs characteristics when input probability characteristics are specified. The relative variation in the two pull-in voltages due to each of the input parameters is used to determine the critical parameters.
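
    The overall workflow (a sampling design, a response surface, and dense Monte Carlo on the surrogate to obtain smooth output PDFs) can be sketched as follows. A quadratic least-squares surface is used as a simple stand-in for MARS, and the classical lumped parallel-plate pull-in formula replaces the paper's detailed clamped-clamped beam model; the nominal design values and ±10% variability are hypothetical.

      import numpy as np
      from scipy.stats import qmc

      EPS0 = 8.854e-12  # vacuum permittivity, F/m

      def pull_in_voltage(k, g, A):
          # Classical lumped parallel-plate pull-in estimate; stands in for the
          # detailed beam model with fringing fields and squeeze-film damping.
          return np.sqrt(8.0 * k * g**3 / (27.0 * EPS0 * A))

      # Nominal design values and +/-10% variability (hypothetical numbers)
      nominal = np.array([10.0, 1.0e-6, 1.0e-8])     # stiffness N/m, gap m, area m^2
      lower, upper = 0.9 * nominal, 1.1 * nominal

      # 1) Latin hypercube design and model evaluations
      unit_train = qmc.LatinHypercube(d=3, seed=4).random(200)
      y_train = pull_in_voltage(*qmc.scale(unit_train, lower, upper).T)

      # 2) Quadratic response surface in the unit hypercube (stand-in for MARS)
      def features(U):
          a, b, c = U.T
          return np.column_stack([np.ones(len(U)), a, b, c,
                                  a * b, a * c, b * c, a**2, b**2, c**2])

      beta, *_ = np.linalg.lstsq(features(unit_train), y_train, rcond=None)

      # 3) Dense Monte Carlo on the cheap surrogate to obtain the output PDF
      rng = np.random.default_rng(5)
      u_mc = rng.uniform(0.0, 1.0, size=(200_000, 3))
      v = features(u_mc) @ beta
      print(f"pull-in voltage: mean {v.mean():.2f} V, std {v.std():.3f} V, "
            f"99th percentile {np.percentile(v, 99):.2f} V")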

  19. Uncertainty and Sensitivity of Direct Economic Flood Damages: the FloodRisk Free and Open-Source Software

    NASA Astrophysics Data System (ADS)

    Albano, R.; Sole, A.; Mancusi, L.; Cantisani, A.; Perrone, A.

    2017-12-01

    The considerable increase of flood damages in the the past decades has shifted in Europe the attention from protection against floods to managing flood risks. In this context, the expected damages assessment represents a crucial information within the overall flood risk management process. The present paper proposes an open source software, called FloodRisk, that is able to operatively support stakeholders in the decision making processes with a what-if approach by carrying out the rapid assessment of the flood consequences, in terms of direct economic damage and loss of human lives. The evaluation of the damage scenarios, trough the use of the GIS software proposed here, is essential for cost-benefit or multi-criteria analysis of risk mitigation alternatives. However, considering that quantitative assessment of flood damages scenarios is characterized by intrinsic uncertainty, a scheme has been developed to identify and quantify the role of the input parameters in the total uncertainty of flood loss model application in urban areas with mild terrain and complex topography. By the concept of parallel models, the contribution of different module and input parameters to the total uncertainty is quantified. The results of the present case study have exhibited a high epistemic uncertainty on the damage estimation module and, in particular, on the type and form of the utilized damage functions, which have been adapted and transferred from different geographic and socio-economic contexts because there aren't depth-damage functions that are specifically developed for Italy. Considering that uncertainty and sensitivity depend considerably on local characteristics, the epistemic uncertainty associated with the risk estimate is reduced by introducing additional information into the risk analysis. In the light of the obtained results, it is evident the need to produce and disseminate (open) data to develop micro-scale vulnerability curves. Moreover, the urgent need to push forward research into the implementation of methods and models for the assimilation of uncertainties in decision-making processes emerges.

  20. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Chen, Xingyuan; Ye, Ming

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of the uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and the permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.
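
    The variance decomposition step can be illustrated with standard pick-freeze Sobol estimators on the Ishigami test function; this generic sketch is not the hierarchical, geostatistically grouped analysis of the paper, and the sample size is chosen only for demonstration.

      import numpy as np

      def model(X):
          # Ishigami test function, a standard benchmark with known sensitivities
          x1, x2, x3 = X.T
          return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

      d, n = 3, 100_000
      rng = np.random.default_rng(2)
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]          # "pick-freeze": column i taken from B
          fABi = model(ABi)
          S_i = np.mean(fB * (fABi - fA)) / var            # first-order (Saltelli)
          ST_i = 0.5 * np.mean((fA - fABi) ** 2) / var     # total effect (Jansen)
          print(f"x{i+1}: S = {S_i:.2f}, ST = {ST_i:.2f}")
      # Expected roughly S = (0.31, 0.44, 0.00) for the Ishigami constants used here.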

  2. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  3. Predicting acidification recovery at the Hubbard Brook Experimental Forest, New Hampshire: evaluation of four models.

    PubMed

    Tominaga, Koji; Aherne, Julian; Watmough, Shaun A; Alveteg, Mattias; Cosby, Bernard J; Driscoll, Charles T; Posch, Maximilian; Pourmokhtarian, Afshin

    2010-12-01

    The performance and prediction uncertainty (owing to parameter and structural uncertainties) of four dynamic watershed acidification models (MAGIC, PnET-BGC, SAFE, and VSD) were assessed by systematically applying them to data from the Hubbard Brook Experimental Forest (HBEF), New Hampshire, where long-term records of precipitation and stream chemistry were available. In order to facilitate systematic evaluation, Monte Carlo simulation was used to randomly generate common model input data sets (n = 10,000) from parameter distributions; input data were subsequently translated among models to retain consistency. The model simulations were objectively calibrated against observed data (streamwater: 1963-2004, soil: 1983). The ensemble of calibrated models was used to assess future response of soil and stream chemistry to reduced sulfur deposition at the HBEF. Although both hindcast (1850-1962) and forecast (2005-2100) predictions were qualitatively similar across the four models, the temporal pattern of key indicators of acidification recovery (stream acid neutralizing capacity and soil base saturation) differed substantially. The range in predictions resulted from differences in model structure and their associated posterior parameter distributions. These differences can be accommodated by employing multiple models (ensemble analysis) but have implications for individual model applications.

  4. Direct computation of stochastic flow in reservoirs with uncertain parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.

    1997-01-15

    A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.

  5. On the estimation of stellar parameters with uncertainty prediction from Generative Artificial Neural Networks: application to Gaia RVS simulated spectra

    NASA Astrophysics Data System (ADS)

    Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.

    2016-10-01

    Aims: We present an innovative artificial neural network (ANN) architecture, called Generative ANN (GANN), that computes the forward model, that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/ Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs to estimate the stellar parameters as a function of the star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, obtaining residuals in the derivation of [Fe/H] and [α/ Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which corresponds to on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but also a goodness-of-fit between the observed spectrum and the predicted one for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameters space once a noise model is assumed. This can be used for novelty detection and quality assessment.

  6. SIMULATING RADIONUCLIDE FATE AND TRANSPORT IN THE UNSATURATED ZONE: EVALUATION AND SENSITIVITY ANALYSES OF SELECT COMPUTER MODELS

    EPA Science Inventory

    Numerical, mathematical models of water and chemical movement in soils are used as decision aids for determining soil screening levels (SSLs) of radionuclides in the unsaturated zone. Many models require extensive input parameters which include uncertainty due to soil variabil...

  7. Experimental and modeling uncertainties in the validation of lower hybrid current drive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poli, F. M.; Bonoli, P. T.; Chilenski, M.

    Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.

  8. Experimental and modeling uncertainties in the validation of lower hybrid current drive

    DOE PAGES

    Poli, F. M.; Bonoli, P. T.; Chilenski, M.; ...

    2016-07-28

    Our work discusses sources of uncertainty in the validation of lower hybrid wave current drive simulations against experiments, by evolving self-consistently the magnetic equilibrium and the heating and current drive profiles, calculated with a combined toroidal ray tracing code and 3D Fokker–Planck solver. The simulations indicate a complex interplay of elements, where uncertainties in the input plasma parameters, in the models and in the transport solver combine and compensate each other, at times. It is concluded that ray-tracing calculations should include a realistic representation of the density and temperature in the region between the confined plasma and the wall, which is especially important in regimes where the LH waves are weakly damped and undergo multiple reflections from the plasma boundary. Uncertainties introduced in the processing of diagnostic data as well as uncertainties introduced by model approximations are assessed. We show that, by comparing the evolution of the plasma parameters in self-consistent simulations with available data, inconsistencies can be identified and limitations in the models or in the experimental data assessed.

  9. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
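
    A brute-force Monte Carlo version of a variance ratio function is sketched below: the output variance is recomputed after scaling the variance of one input and compared with the baseline. Note that the paper's estimators reuse a single sample set rather than re-sampling per scaling factor, and the toy model and input distributions here are hypothetical.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 200_000

      def model(x1, x2):
          # Toy nonlinear response with two independent normal inputs
          return x1**2 + 0.5 * x1 * x2 + np.exp(0.2 * x2)

      base_std = np.array([1.0, 1.0])

      def output_variance(scale_idx=None, theta=1.0):
          # Var(Y) when the variance of one input is multiplied by theta
          std = base_std.copy()
          if scale_idx is not None:
              std[scale_idx] *= np.sqrt(theta)
          x1, x2 = (rng.standard_normal(n) * s for s in std)
          return model(x1, x2).var()

      v0 = output_variance()
      for i in range(2):
          for theta in (0.5, 0.25):      # reduce Var(X_i) to 50% / 25% of baseline
              ratio = output_variance(scale_idx=i, theta=theta) / v0
              print(f"Var(Y | Var(x{i+1}) x {theta}) / Var(Y) = {ratio:.2f}")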

  10. Linking 1D coastal ocean modelling to environmental management: an ensemble approach

    NASA Astrophysics Data System (ADS)

    Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia

    2017-12-01

    The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea) considered homogeneous from the point of view of hydrological properties, and forcing it by changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressures. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The information on model result variability is aimed at conveying the uncertainties/reliability of the model results efficiently and comprehensibly to non-technical EBM planners and stakeholders, so that model-based information can effectively contribute to EBM.

  11. Fuzzy simulation in concurrent engineering

    NASA Technical Reports Server (NTRS)

    Kraslawski, A.; Nystrom, L.

    1992-01-01

    Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes where the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. The examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
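
    The vertex method mentioned above can be sketched for triangular fuzzy inputs: at each alpha level the inputs reduce to intervals, the model is evaluated at every corner combination, and the minimum and maximum define the output interval. The fuzzy numbers and the toy process model below are illustrative, and the corner evaluation is exact only when the response is monotonic over each alpha-cut box.

      import itertools
      import numpy as np

      # Triangular fuzzy inputs: (left, peak, right). Hypothetical feed data.
      fuzzy_inputs = {
          "feed_rate":   (9.0, 10.0, 11.5),      # kg/h
          "temperature": (340.0, 350.0, 355.0),  # K
      }

      def model(feed_rate, temperature):
          # Toy monotonic process model (e.g. a conversion-like quantity)
          return 0.08 * feed_rate * np.exp(-2000.0 / temperature)

      def alpha_cut(tri, alpha):
          a, b, c = tri
          return (a + alpha * (b - a), c - alpha * (c - b))

      for alpha in (0.0, 0.5, 1.0):
          intervals = [alpha_cut(tri, alpha) for tri in fuzzy_inputs.values()]
          outputs = [model(*corner) for corner in itertools.product(*intervals)]
          print(f"alpha={alpha:.1f}: output in [{min(outputs):.4f}, {max(outputs):.4f}]")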

  12. Dealing with uncertainty in modeling intermittent water supply

    NASA Astrophysics Data System (ADS)

    Lieb, A. M.; Rycroft, C.; Wilkening, J.

    2015-12-01

    Intermittency in urban water supply affects hundreds of millions of people in cities around the world, impacting water quality and infrastructure. Building on previous work to dynamically model the transient flows in water distribution networks undergoing frequent filling and emptying, we now consider the hydraulic implications of uncertain input data. Water distribution networks undergoing intermittent supply are often poorly mapped, and household metering frequently ranges from patchy to nonexistent. In the face of uncertain pipe material, pipe slope, network connectivity, and outflow, we investigate how uncertainty affects dynamical modeling results. We furthermore identify which parameters exert the greatest influence on uncertainty, helping to prioritize data collection.

  13. LensEnt2: Maximum-entropy weak lens reconstruction

    NASA Astrophysics Data System (ADS)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity, and smoothness on scales of w arcsec where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

  14. Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro

    2013-12-01

    We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.

  15. Dealing with uncertainties in environmental burden of disease assessment

    PubMed Central

    2009-01-01

    Disability Adjusted Life Years (DALYs) combine the number of people affected by disease or mortality in a population and the duration and severity of their condition into one number. The environmental burden of disease is the number of DALYs that can be attributed to environmental factors. Environmental burden of disease estimates enable policy makers to evaluate, compare and prioritize dissimilar environmental health problems or interventions. These estimates often have various uncertainties and assumptions which are not always made explicit. Besides statistical uncertainty in input data and parameters – which is commonly addressed – a variety of other types of uncertainties may substantially influence the results of the assessment. We have reviewed how different types of uncertainties affect environmental burden of disease assessments, and we give suggestions as to how researchers could address these uncertainties. We propose the use of an uncertainty typology to identify and characterize uncertainties. Finally, we argue that uncertainties need to be identified, assessed, reported and interpreted in order for assessment results to adequately support decision making. PMID:19400963

  16. Quantifying sampling noise and parametric uncertainty in atomistic-to-continuum simulations using surrogate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salloum, Maher N.; Sargsyan, Khachik; Jones, Reese E.

    2015-08-11

    We present a methodology to assess the predictive fidelity of multiscale simulations by incorporating uncertainty in the information exchanged between the components of an atomistic-to-continuum simulation. We account for both the uncertainty due to finite sampling in molecular dynamics (MD) simulations and the uncertainty in the physical parameters of the model. Using Bayesian inference, we represent the expensive atomistic component by a surrogate model that relates the long-term output of the atomistic simulation to its uncertain inputs. We then present algorithms to solve for the variables exchanged across the atomistic-continuum interface in terms of polynomial chaos expansions (PCEs). We also consider a simple Couette flow where velocities are exchanged between the atomistic and continuum components, while accounting for uncertainty in the atomistic model parameters and the continuum boundary conditions. Results show convergence of the coupling algorithm at a reasonable number of iterations. As a result, the uncertainty in the obtained variables significantly depends on the amount of data sampled from the MD simulations and on the width of the time averaging window used in the MD simulations.

  17. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.
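
    A minimal sketch of the variance-based (Sobol-type) sensitivity idea mentioned above, using a pick-freeze estimator on a made-up three-input function; this is not the entry, descent, and landing simulation itself, only the estimator pattern.

        import numpy as np

        # Illustrative scalar output of three uncertain inputs (not the EDL model).
        def model(x):
            return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

        rng = np.random.default_rng(1)
        n, d = 20000, 3
        A = rng.standard_normal((n, d))          # two independent input sample blocks
        B = rng.standard_normal((n, d))
        yA = model(A)
        var_y = yA.var()

        # First-order Sobol index S_i = V(E[y | x_i]) / V(y), estimated with the
        # pick-freeze trick: column i taken from A, the remaining columns from B.
        for i in range(d):
            ABi = B.copy()
            ABi[:, i] = A[:, i]
            S_i = np.mean(yA * (model(ABi) - model(B))) / var_y
            print(f"first-order index of input {i}: {S_i:.3f}")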

  18. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and intracranial pressure had dominant impact on the peak strains in the ONH and retro-laminar optic nerve, respectively; optic nerve and lamina cribrosa stiffness were also important. This investigation illustrates the ability of LHSPRCC to identify the most influential physiological parameters, which must therefore be well-characterized to produce the most accurate numerical results.
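
    The LHS/PRCC workflow can be sketched in a few lines: draw a Latin hypercube design, run the model, rank-transform inputs and output, and compute partial rank correlations from regression residuals. The three-parameter toy model and ranges below are hypothetical stand-ins for the LPM and FEM inputs, assuming a recent SciPy with the stats.qmc module.

        import numpy as np
        from scipy.stats import qmc, rankdata

        # Toy model standing in for the physiological simulation (illustrative only).
        def model(p):
            return 2.0 * p[:, 0] - 0.5 * p[:, 1] + 0.05 * p[:, 2] ** 2

        sampler = qmc.LatinHypercube(d=3, seed=0)
        u = sampler.random(n=500)                      # stratified samples in [0, 1)^3
        lo, hi = np.array([0.5, 1.0, 10.0]), np.array([1.5, 3.0, 30.0])
        params = qmc.scale(u, lo, hi)                  # map to physical ranges
        y = model(params)

        # PRCC: rank-transform, then correlate the residuals of parameter i and of
        # the output after regressing both on all remaining parameters.
        R = np.column_stack([rankdata(c) for c in params.T])
        ry = rankdata(y)
        for i in range(R.shape[1]):
            others = np.column_stack([np.ones_like(ry), np.delete(R, i, axis=1)])
            res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
            res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
            prcc = np.corrcoef(res_x, res_y)[0, 1]
            print(f"PRCC(parameter {i}) ~ {prcc:+.2f}")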

  19. Particle Dark Matter constraints: the effect of Galactic uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benito, Maria; Bernal, Nicolás; Iocco, Fabio

    2017-02-01

    Collider, space, and Earth based experiments are now able to probe several extensions of the Standard Model of particle physics which provide viable dark matter candidates. Direct and indirect dark matter searches rely on inputs of astrophysical nature, such as the local dark matter density or the shape of the dark matter density profile in the target in object. The determination of these quantities is highly affected by astrophysical uncertainties. The latter, especially those for our own Galaxy, are ill-known, and often not fully accounted for when analyzing the phenomenology of particle physics models. In this paper we present amore » systematic, quantitative estimate of how astrophysical uncertainties on Galactic quantities (such as the local galactocentric distance, circular velocity, or the morphology of the stellar disk and bulge) propagate to the determination of the phenomenology of particle physics models, thus eventually affecting the determination of new physics parameters. We present results in the context of two specific extensions of the Standard Model (the Singlet Scalar and the Inert Doublet) that we adopt as case studies for their simplicity in illustrating the magnitude and impact of such uncertainties on the parameter space of the particle physics model itself. Our findings point toward very relevant effects of current Galactic uncertainties on the determination of particle physics parameters, and urge a systematic estimate of such uncertainties in more complex scenarios, in order to achieve constraints on the determination of new physics that realistically include all known uncertainties.« less

  20. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
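
    Nested Kronrod-Patterson-Hermite rules are not part of standard numerical libraries, but the non-nested Gauss-Hermite collocation baseline the abstract compares against can be sketched for a single Gaussian parameter as follows; the exponential forward model is illustrative only.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss

        # Illustrative forward model of one standard-normal parameter (a stand-in
        # for a flow and transport solver evaluated at collocation points).
        def forward_model(xi):
            return np.exp(0.3 * xi)

        nodes, weights = hermegauss(9)                # probabilists' Gauss-Hermite rule
        weights = weights / np.sqrt(2.0 * np.pi)      # normalize to a unit Gaussian

        y = forward_model(nodes)                      # one model run per node
        mean = np.sum(weights * y)
        var = np.sum(weights * (y - mean) ** 2)
        print(f"collocation mean ~ {mean:.4f}, std ~ {np.sqrt(var):.4f}")
        # analytic reference for this toy model: mean = exp(0.045) ~ 1.0460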

  1. Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, Rodney O.; Passalacqua, Alberto

    2016-02-01

    Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in the industry. Various models were proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using extended quadrature method of moments (EQMOM) and extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low order statistics, and required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach into the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter. The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented into MFIX. The effects of uncertainty on the disperse-phase volume fraction, the phase velocities and the pressure drop inside the fluidized bed are examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.

  2. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameters sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.

  3. Temporal rainfall estimation using input data reduction and model inversion

    NASA Astrophysics Data System (ADS)

    Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.

    2016-12-01

    Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of rainfall input to be considered when estimating model parameters and provides the ability to estimate rainfall from poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously along with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow that was superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series that can simulate streamflow superior to that simulated by a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
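
    A minimal sketch of the input-reduction step: a Haar wavelet decomposition of a synthetic rainfall series, keeping only the coarse approximation coefficients as the reduced set of unknowns. The study uses other wavelets and the DREAMZS sampler; the data, wavelet, and level choices here are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        rain = rng.gamma(shape=0.3, scale=5.0, size=64)   # synthetic rainfall series

        def haar_level(x):
            """One level of the Haar DWT: approximation and detail coefficients."""
            pairs = x.reshape(-1, 2)
            approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
            detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
            return approx, detail

        def haar_inverse(approx, detail):
            """Invert one Haar level."""
            out = np.empty(approx.size * 2)
            out[0::2] = (approx + detail) / np.sqrt(2.0)
            out[1::2] = (approx - detail) / np.sqrt(2.0)
            return out

        # Two decomposition levels; only the coarse approximation coefficients would
        # be inferred jointly with the hydrological model parameters.
        a1, d1 = haar_level(rain)
        a2, d2 = haar_level(a1)
        print("original dimension:", rain.size, "-> reduced dimension:", a2.size)

        # Reconstruction with the detail coefficients set to zero (smoothed rainfall).
        rain_smooth = haar_inverse(haar_inverse(a2, np.zeros_like(a2)), np.zeros_like(a1))
        print("total rainfall preserved:", np.isclose(rain.sum(), rain_smooth.sum()))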

  4. Addressing the impact of environmental uncertainty in plankton model calibration with a dedicated software system: the Marine Model Optimization Testbed (MarMOT 1.1 alpha)

    NASA Astrophysics Data System (ADS)

    Hemmings, J. C. P.; Challenor, P. G.

    2012-04-01

    A wide variety of different plankton system models have been coupled with ocean circulation models, with the aim of understanding and predicting aspects of environmental change. However, an ability to make reliable inferences about real-world processes from the model behaviour demands a quantitative understanding of model error that remains elusive. Assessment of coupled model output is inhibited by relatively limited observing system coverage of biogeochemical components. Any direct assessment of the plankton model is further inhibited by uncertainty in the physical state. Furthermore, comparative evaluation of plankton models on the basis of their design is inhibited by the sensitivity of their dynamics to many adjustable parameters. Parameter uncertainty has been widely addressed by calibrating models at data-rich ocean sites. However, relatively little attention has been given to quantifying uncertainty in the physical fields required by the plankton models at these sites, and tendencies in the biogeochemical properties due to the effects of horizontal processes are often neglected. Here we use model twin experiments, in which synthetic data are assimilated to estimate a system's known "true" parameters, to investigate the impact of error in a plankton model's environmental input data. The experiments are supported by a new software tool, the Marine Model Optimization Testbed, designed for rigorous analysis of plankton models in a multi-site 1-D framework. Simulated errors are derived from statistical characterizations of the mixed layer depth, the horizontal flux divergence tendencies of the biogeochemical tracers and the initial state. Plausible patterns of uncertainty in these data are shown to produce strong temporal and spatial variability in the expected simulation error variance over an annual cycle, indicating variation in the significance attributable to individual model-data differences. An inverse scheme using ensemble-based estimates of the simulation error variance to allow for this environment error performs well compared with weighting schemes used in previous calibration studies, giving improved estimates of the known parameters. The efficacy of the new scheme in real-world applications will depend on the quality of statistical characterizations of the input data. Practical approaches towards developing reliable characterizations are discussed.

  5. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to the model developers, analysts, and end users for assessing the MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs, and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.

  6. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k_eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k_eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.

  7. On how to avoid input and structural uncertainties corrupt the inference of hydrological parameters using a Bayesian framework

    NASA Astrophysics Data System (ADS)

    Hernández, Mario R.; Francés, Félix

    2015-04-01

    One phase of the hydrological model implementation process that contributes significantly to the uncertainty of hydrological predictions is the calibration phase, in which values of the unknown model parameters are tuned by optimizing an objective function. An unsuitable error model (e.g. Standard Least Squares or SLS) introduces noise into the estimation of the parameters. The main sources of this noise are the input errors and the structural deficiencies of the hydrological model. Thus, the biased calibrated parameters cause the model divergence phenomenon, where the error variance of the (spatially and temporally) forecasted flows far exceeds the error variance in the fitting period, and provoke the loss of part or all of the physical meaning of the modeled processes. In other words, this yields a calibrated hydrological model which works well, but not for the right reasons. Moreover, an unsuitable error model yields an unreliable predictive uncertainty assessment. Hence, with the aim of preventing all these undesirable effects, this research focuses on the Bayesian joint inference (BJI) of both the hydrological and error model parameters, considering a general additive (GA) error model that allows for correlation, non-stationarity (in variance and bias) and non-normality of model residuals. As the hydrological model, a conceptual distributed model called TETIS has been used, with a particular split structure of the effective model parameters. Bayesian inference has been performed with the aid of a Markov Chain Monte Carlo (MCMC) algorithm called DREAM-ZS. The MCMC algorithm quantifies the uncertainty of the hydrological and error model parameters by obtaining the joint posterior probability distribution, conditioned on the observed flows. The BJI methodology is a very powerful and reliable tool, but it must be used correctly; that is, if non-stationarity in error variance and bias is modeled, the Total Laws must be taken into account. The results of this research show that the application of BJI with a GA error model improves the robustness of the hydrological parameters (diminishing the model divergence phenomenon) and improves the reliability of the streamflow predictive distribution, compared with the results of an unsuitable error model such as SLS. Finally, the most likely prediction in a validation period shows a similar performance for both the BJI+GA and SLS error models.
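
    The joint-inference idea can be sketched with a toy one-parameter model, a heteroscedastic additive error model, and a random-walk Metropolis sampler; TETIS and DREAM-ZS are not reproduced here, and all names, priors, and numbers are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic "observed" flows from a toy one-parameter model with
        # heteroscedastic noise (standard deviation grows with the simulated flow).
        def sim(theta, t):
            return theta * np.exp(-t / 10.0) + 1.0

        t = np.arange(50)
        true_theta, true_a, true_b = 5.0, 0.1, 0.2
        q_obs = sim(true_theta, t) + rng.normal(0, true_a + true_b * sim(true_theta, t))

        def log_post(p):
            theta, a, b = p
            if theta <= 0 or a <= 0 or b < 0:
                return -np.inf                   # flat priors on admissible ranges
            mu = sim(theta, t)
            sigma = a + b * mu                   # sketch of an additive error model
            return np.sum(-0.5 * ((q_obs - mu) / sigma) ** 2 - np.log(sigma))

        # Random-walk Metropolis over the joint vector (theta, a, b).
        p = np.array([2.0, 0.5, 0.1])
        lp = log_post(p)
        chain = []
        for _ in range(20000):
            prop = p + rng.normal(0, [0.1, 0.02, 0.02])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                p, lp = prop, lp_prop
            chain.append(p)
        chain = np.array(chain[5000:])           # discard burn-in
        print("posterior means (theta, a, b):", chain.mean(axis=0).round(3))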

  8. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
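
    The PEST-style black-box coupling described above reduces to a simple pattern: write parameter values into text input files, run the application model as a system command, and read selected outputs back. The file names, placeholder syntax, and run script below are hypothetical illustrations, not the actual iTOUGH2-PEST formats.

        import subprocess
        from pathlib import Path

        def run_external_model(params):
            """Write parameters to a text input file, run a black-box executable,
            and read the simulated observations back from its text output."""
            # 1. Fill a simple template (hypothetical file and @name@ placeholders).
            template = Path("model.in.tpl").read_text()
            for name, value in params.items():
                template = template.replace(f"@{name}@", f"{value:.6e}")
            Path("model.in").write_text(template)

            # 2. Run the application model to completion without user intervention.
            subprocess.run(["./run_model.sh"], check=True)

            # 3. Extract selected output variables (here: one value per line).
            return [float(line) for line in Path("model.out").read_text().split()]

        # An optimization driver would call this repeatedly, e.g.
        # sims = run_external_model({"permeability": 1e-14, "porosity": 0.35})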

  9. Assimilating Leaf Area Index Estimates from Remote Sensing into the Simulations of a Cropping Systems Model

    USDA-ARS?s Scientific Manuscript database

    Spatial extrapolation of cropping systems models for regional crop growth and water use assessment and farm-level precision management has been limited by the vast model input requirements and the model sensitivity to parameter uncertainty. Remote sensing has been proposed as a viable source of spat...

  10. Real-air data reduction procedures based on flow parameters measured in the test section of supersonic and hypersonic facilities

    NASA Technical Reports Server (NTRS)

    Miller, C. G., III; Wilder, S. E.

    1972-01-01

    Data-reduction procedures for determining free stream and post-normal shock kinetic and thermodynamic quantities are derived. These procedures are applicable to imperfect real air flows in thermochemical equilibrium for temperatures to 15 000 K and a range of pressures from 0.25 N/sq m to 1 GN/sq m. Although derived primarily to meet the immediate needs of the 6-inch expansion tube, these procedures are applicable to any supersonic or hypersonic test facility where combinations of three of the following flow parameters are measured in the test section: (1) Stagnation pressure behind normal shock; (2) freestream static pressure; (3) stagnation point heat transfer rate; (4) free stream velocity; (5) stagnation density behind normal shock; and (6) free stream density. Limitations of the nine procedures and uncertainties in calculated flow quantities corresponding to uncertainties in measured input data are discussed. A listing of the computer program is presented, along with a description of the inputs required and a sample of the data printout.

  11. Quantifying and managing uncertainty in operational modal analysis

    NASA Astrophysics Data System (ADS)

    Au, Siu-Kui; Brownjohn, James M. W.; Mottershead, John E.

    2018-03-01

    Operational modal analysis aims at identifying the modal properties (natural frequency, damping, etc.) of a structure using only the (output) vibration response measured under ambient conditions. Highly economical and feasible, it is becoming a common practice in full-scale vibration testing. In the absence of (input) loading information, however, the modal properties have significantly higher uncertainty than their counterparts identified from free or forced vibration (known input) tests. Mastering the relationship between identification uncertainty and test configuration is of great interest to both scientists and engineers, e.g., for achievable precision limits and test planning/budgeting. Addressing this challenge beyond the current state of the art, which is mostly concerned with identification algorithms, this work obtains closed-form analytical expressions for the identification uncertainty (variance) of modal parameters that fundamentally explain the effect of test configuration. Collectively referred to as 'uncertainty laws', these expressions are asymptotically correct for well-separated modes, small damping and long data; and are applicable under non-asymptotic situations. They provide a scientific basis for planning and standardization of ambient vibration tests, where factors such as channel noise, sensor number and location can be quantitatively accounted for. The work is reported comprehensively with verification through synthetic and experimental data (laboratory and field), scientific implications and practical guidelines for planning ambient vibration tests.

  12. NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, A. L.; Morsink, S. M.; Fiege, J. D.

    The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer's line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best determined are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.

  13. Quantification of uncertainties in the tsunami hazard for Cascadia using statistical emulation

    NASA Astrophysics Data System (ADS)

    Guillas, S.; Day, S. J.; Joakim, B.

    2016-12-01

    We present new high-resolution simulations of tsunami wave propagation and coastal inundation for the Cascadia region in the Pacific Northwest. The coseismic representation in this analysis is novel, and more realistic than in previous studies, as we jointly parametrize multiple aspects of the seabed deformation. Due to the large computational cost of such simulators, statistical emulation is required in order to carry out uncertainty quantification tasks, as emulators efficiently approximate simulators. The emulator replaces the tsunami model VOLNA by a fast surrogate, so we are able to efficiently propagate uncertainties from the source characteristics to wave heights, in order to probabilistically assess tsunami hazard for Cascadia. We employ a new method for the design of the computer experiments in order to reduce the number of runs while maintaining good approximation properties of the emulator. Out of the initial nine parameters, mostly describing the geometry and time variation of the seabed deformation, we drop two parameters since these turn out to not have an influence on the resulting tsunami waves at the coast. We model the impact of another parameter linearly as its influence on the wave heights is identified as linear. We combine this screening approach with the sequential design algorithm MICE (Mutual Information for Computer Experiments), which adaptively selects the input values at which to run the computer simulator, in order to maximize the expected information gain (mutual information) over the input space. As a result, the emulation is made possible and accurate. Starting from distributions of the source parameters that encapsulate geophysical knowledge of the possible source characteristics, we derive distributions of the tsunami wave heights along the coastline.
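
    A minimal sketch of the emulation step using a Gaussian-process surrogate from scikit-learn; the two-parameter toy "simulator" and the random training design are stand-ins for VOLNA and the MICE sequential design, which are not reproduced here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Toy stand-in for the tsunami simulator: wave height as a function of two
        # source parameters (illustrative only).
        def simulator(x):
            return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

        rng = np.random.default_rng(4)
        X_train = rng.uniform(0.0, 1.0, size=(40, 2))   # small training design
        y_train = simulator(X_train)

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        # The emulator now stands in for the simulator: cheap predictions with an
        # uncertainty estimate that could drive a sequential design such as MICE.
        X_new = rng.uniform(0.0, 1.0, size=(5, 2))
        mean, std = gp.predict(X_new, return_std=True)
        for m, s, truth in zip(mean, std, simulator(X_new)):
            print(f"emulator {m:+.3f} +/- {s:.3f}   simulator {truth:+.3f}")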

  14. The significance of spatial variability of rainfall on streamflow: A synthetic analysis at the Upper Lee catchment, UK

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, Ilias; McIntyre, Neil; Wheater, Howard

    2017-04-01

    Rainfall, one of the main inputs in hydrological modeling, is a highly heterogeneous process over a wide range of scales in space, and hence ignoring spatial rainfall information can affect the simulated streamflow. Calibration of hydrological model parameters is rarely a straightforward task due to parameter equifinality and the tendency of parameters to compensate for other uncertainties, i.e. structural and forcing input errors. Here, we analyse the significance of spatial variability of rainfall on streamflow as a function of catchment scale and type, and antecedent conditions, using the continuous time, semi-distributed PDM hydrological model at the Upper Lee catchment, UK. The impact of catchment scale and type is assessed using 11 nested catchments ranging in scale from 25 to 1040 km², and further assessed by artificially changing the catchment characteristics and translating these to model parameters with uncertainty using model regionalisation. Synthetic rainfall events are introduced to directly relate the change in simulated streamflow to the spatial variability of rainfall. Overall, we conclude that the antecedent catchment wetness and catchment type play an important role in controlling the significance of the spatial distribution of rainfall on streamflow. Results show a relationship between hydrograph characteristics (streamflow peak and volume) and the degree of spatial variability of rainfall for the impermeable catchments under dry antecedent conditions, although this decreases at larger scales; however this sensitivity is significantly undermined under wet antecedent conditions. Although there is indication that the impact of spatial rainfall on streamflow varies as a function of catchment scale, the variability of antecedent conditions between the synthetic catchments seems to mask this significance. Finally, hydrograph responses to different spatial patterns in rainfall depend on assumptions used for model parameter estimation and also the spatial variation in parameters, indicating the need for an uncertainty framework in such investigations.

  15. Entry dynamics of space shuttle orbiter with longitudinal stability and control uncertainties at supersonic and hypersonic speeds

    NASA Technical Reports Server (NTRS)

    Stone, H. W.; Powell, R. W.

    1977-01-01

    A six-degree-of-freedom simulation analysis was conducted to examine the effects of longitudinal static aerodynamic stability and control uncertainties on the performance of the space shuttle orbiter automatic (no manual inputs) entry guidance and control systems. To establish the acceptable boundaries, the static aerodynamic characteristics were varied either by applying a multiplier to the aerodynamic parameter or by adding an increment. With either of two previously identified control system modifications included, the acceptable longitudinal aerodynamic boundaries were determined.

  16. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, the MCMC sampling entails a large number of model calls, and could easily become computationally unwieldy if the high-fidelity hydrogeologic model simulation is time consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, and hence the MCMC sampling can be efficiently performed. In this study, the MODFLOW model is developed to simulate the groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and used to run representative simulations to generate training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to sensitivity analysis, insensitive parameters are screened out of Bayesian inversion of the MODFLOW model, further saving computing efforts. The posterior probability distribution of input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.

  17. Are head-to-head trials of biologics needed? The role of value of information methods in arthritis research.

    PubMed

    Welton, Nicky J; Madan, Jason; Ades, Anthony E

    2011-09-01

    Reimbursement decisions are typically based on cost-effectiveness analyses. While a cost-effectiveness analysis can identify the optimum strategy, there is usually some degree of uncertainty around this decision. Sources of uncertainty include statistical sampling error in treatment efficacy measures, underlying baseline risk, utility measures and costs, as well as uncertainty in the structure of the model. The optimal strategy is therefore only optimal on average, and a decision to adopt this strategy might still be the wrong decision if all uncertainty could be eliminated. This means that there is a quantifiable expected (average) loss attaching to decisions made under uncertainty, and hence a value in collecting information to reduce that uncertainty. Value of information (VOI) analyses can be used to provide guidance on whether more research would be cost-effective, which particular model inputs (parameters) have the most bearing on decision uncertainty, and can also help with the design and sample size of further research. Here, we introduce the key concepts in VOI analyses, and highlight the inputs required to calculate it. The adoption of the new biologic treatments for RA and PsA tends to be based on placebo-controlled trials. We discuss the possible role of VOI analyses in deciding whether head-to-head comparisons of the biologic therapies should be carried out, illustrating with examples from other fields. We emphasize the need for a model of the natural history of RA and PsA, which reflects a consensus view.
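
    The core VOI quantity, the expected value of perfect information (EVPI), can be computed directly from probabilistic sensitivity analysis samples of net benefit for each strategy. The sketch below uses made-up distributions for two strategies; the numbers are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100000

        # Simulated net benefit for two strategies under parameter uncertainty
        # (illustrative distributions, not real trial or cost data).
        nb_standard = rng.normal(loc=20000, scale=3000, size=n)
        nb_biologic = rng.normal(loc=21000, scale=5000, size=n)
        nb = np.column_stack([nb_standard, nb_biologic])

        # Decision under current information: pick the strategy with the best
        # expected net benefit; its expected value is the baseline.
        value_current_info = nb.mean(axis=0).max()

        # With perfect information we could pick the best strategy per realization.
        value_perfect_info = nb.max(axis=1).mean()

        evpi_per_patient = value_perfect_info - value_current_info
        print(f"EVPI ~ {evpi_per_patient:.0f} per patient")
        # Multiplying by the effective patient population bounds the value of
        # further research such as a head-to-head trial.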

  18. Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator

    NASA Astrophysics Data System (ADS)

    Oucheriah, Said

    2017-11-01

    In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is successfully developed and implemented for the output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped as a signal that is accurately estimated using a low-pass filter and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet certain specifications. Using simulation, the effectiveness of the proposed controller is compared against the sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties, large input and load perturbations.

  19. The significance of parameter uncertainties for the prediction of offshore pile driving noise.

    PubMed

    Lippert, Tristan; von Estorff, Otto

    2014-11-01

    Due to the construction of offshore wind farms and its potential effect on marine wildlife, the numerical prediction of pile driving noise over long ranges has recently gained importance. In this contribution, a coupled finite element/wavenumber integration model for noise prediction is presented and validated by measurements. The ocean environment, especially the sea bottom, can only be characterized with limited accuracy in terms of input parameters for the numerical model at hand. Therefore the effect of these parameter uncertainties on the prediction of sound pressure levels (SPLs) in the water column is investigated by a probabilistic approach. In fact, a variation of the bottom material parameters by means of Monte-Carlo simulations shows significant effects on the predicted SPLs. A sensitivity analysis of the model with respect to the single quantities is performed, as well as a global variation. Based on the latter, the probability distribution of the SPLs at an exemplary receiver position is evaluated and compared to measurements. The aim of this procedure is to develop a model to reliably predict an interval for the SPLs, by quantifying the degree of uncertainty of the SPLs with the MC simulations.

  20. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
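
    The truncated polynomial spaces mentioned above are defined by multi-index sets; below is a small sketch of the Total Degree construction and one common convention for the Hyperbolic Cross construction, with arbitrary dimension and order chosen for illustration.

        from itertools import product

        def total_degree(dim, order):
            """Multi-indices k with k1 + ... + kd <= order."""
            return [k for k in product(range(order + 1), repeat=dim)
                    if sum(k) <= order]

        def hyperbolic_cross(dim, order):
            """Multi-indices k with prod(k_i + 1) <= order + 1 (penalizes mixed terms)."""
            kept = []
            for k in product(range(order + 1), repeat=dim):
                size = 1
                for ki in k:
                    size *= (ki + 1)
                if size <= order + 1:
                    kept.append(k)
            return kept

        dim, order = 3, 4
        print("tensor product size  :", (order + 1) ** dim)
        print("total degree size    :", len(total_degree(dim, order)))
        print("hyperbolic cross size:", len(hyperbolic_cross(dim, order)))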

  1. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to produce a good-quality dynamic model of an AUV. In a problem with optimal input design, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach that is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design satisfies both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
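
    A compact sketch of the overall pattern: a Bayesian robust design criterion (an expectation over sampled model parameters) minimized by a basic particle swarm with box constraints. The information-like criterion, parameter ranges, and design variables are invented stand-ins, not the paper's AUV formulation.

        import numpy as np

        rng = np.random.default_rng(6)

        # Uncertain model parameters sampled from their prior (illustrative ranges).
        theta_samples = rng.uniform([0.5, 0.1], [1.5, 0.5], size=(50, 2))

        def neg_information(design, theta):
            """Stand-in design criterion: negative information-like score of an
            input signal (amplitude, frequency) for a given parameter draw."""
            amp, freq = design
            return -(theta[0] * amp ** 2 * freq) / (1.0 + theta[1] * freq ** 2)

        def robust_cost(design):
            # Bayesian robust criterion: expectation over the parameter prior.
            return np.mean([neg_information(design, th) for th in theta_samples])

        # Minimal particle swarm over the box 0 <= amp <= 2, 0 <= freq <= 5.
        lo, hi = np.array([0.0, 0.0]), np.array([2.0, 5.0])
        x = rng.uniform(lo, hi, size=(30, 2))
        v = np.zeros_like(x)
        pbest, pcost = x.copy(), np.array([robust_cost(xi) for xi in x])
        gbest = pbest[np.argmin(pcost)]
        for _ in range(100):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)                # enforce design constraints
            cost = np.array([robust_cost(xi) for xi in x])
            improved = cost < pcost
            pbest[improved], pcost[improved] = x[improved], cost[improved]
            gbest = pbest[np.argmin(pcost)]
        print("robust design (amplitude, frequency):", gbest.round(3))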

  2. Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems

    NASA Technical Reports Server (NTRS)

    Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.

    2005-01-01

    The current standards for handling uncertainty in control systems use interval bounds for the definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller design. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strong areas of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the mean and variance of response cumulative distribution functions are shown. Results of the probabilistic analysis of a missile pitch control system, and a non-collocated mass spring system, show the added information provided by this hybrid analysis.
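
    A minimal sketch of propagating probabilistic parameter uncertainty through a classical control analysis: Monte Carlo sampling of a second-order plant's natural frequency and damping ratio, and the resulting distribution of step-response overshoot. The plant and distributions are illustrative, not the missile pitch or mass-spring systems, and the hybrid reliability method itself is not reproduced.

        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(7)

        # Uncertain plant parameters for a standard second-order system
        # (illustrative distributions only).
        wn = rng.normal(2.0, 0.2, size=500)       # natural frequency [rad/s]
        zeta = rng.normal(0.3, 0.05, size=500)    # damping ratio

        overshoot = []
        t = np.linspace(0.0, 10.0, 500)
        for w, z in zip(wn, zeta):
            sys = signal.TransferFunction([w ** 2], [1.0, 2.0 * z * w, w ** 2])
            _, y = signal.step(sys, T=t)
            overshoot.append(y.max() - 1.0)       # step-response overshoot
        overshoot = np.array(overshoot)

        # Probabilistic statements about performance rather than worst-case bounds.
        print(f"mean overshoot     : {overshoot.mean():.3f}")
        print(f"95th percentile    : {np.percentile(overshoot, 95):.3f}")
        print(f"P(overshoot > 0.5) : {(overshoot > 0.5).mean():.3f}")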

  3. Uranium dioxide fuel cladding strain investigation with the use of CYGRO-2 computer program

    NASA Technical Reports Server (NTRS)

    Smith, J. R.

    1973-01-01

    Previously irradiated UO2 thermionic fuel pins in which gross fuel-cladding strain occurred were modeled with the use of a computer program to define controlling parameters which may contribute to cladding strain. The computed strain was compared with measured strain, and the computer input data were studied in an attempt to get agreement with measured strain. Because of the limitations of the program and uncertainties in input data, good agreement with measured cladding strain was not attained. A discussion of these limitations is presented.

  4. Characterization and Uncertainty Analysis of a Reference Pressure Measurement System for Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Amer, Tahani; Tripp, John; Tcheng, Ping; Burkett, Cecil; Sealey, Bradley

    2004-01-01

    This paper presents the calibration results and uncertainty analysis of a high-precision reference pressure measurement system currently used in wind tunnels at the NASA Langley Research Center (LaRC). Sensors, calibration standards, and measurement instruments are subject to errors due to aging, drift with time, environment effects, transportation, the mathematical model, the calibration experimental design, and other factors. Errors occur at every link in the chain of measurements and data reduction from the sensor to the final computed results. At each link of the chain, bias and precision uncertainties must be separately estimated for facility use, and are combined to produce overall calibration and prediction confidence intervals for the instrument, typically at a 95% confidence level. The uncertainty analysis and calibration experimental designs used herein, based on techniques developed at LaRC, employ replicated experimental designs for efficiency, separate estimation of bias and precision uncertainties, and detection of significant parameter drift with time. Final results are presented, including calibration confidence intervals and prediction intervals given as functions of the applied inputs rather than as a fixed percentage of the full-scale value. System uncertainties are propagated beginning with the initial reference pressure standard, to the calibrated instrument as a working standard in the facility. Among the several parameters that can affect the overall results are operating temperature, atmospheric pressure, humidity, and facility vibration. Effects of factors such as initial zeroing and temperature are investigated. The effects of the identified parameters on system performance and accuracy are discussed.

  5. Reliability of a new biokinetic model of zirconium in internal dosimetry: part I, parameter uncertainty analysis.

    PubMed

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential in internal dose assessments and radiation risk analysis for the public, occupational workers, and patients exposed to radionuclides. In this paper, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. The paper is divided into two parts. In the first part of the study published here, the uncertainty sources of the model parameters for zirconium (Zr), developed by the International Commission on Radiological Protection (ICRP), were identified and analyzed. Furthermore, the uncertainty of the biokinetic experimental measurement performed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU) for developing a new biokinetic model of Zr was analyzed according to the Guide to the Expression of Uncertainty in Measurement, published by the International Organization for Standardization. The confidence interval and distribution of model parameters of the ICRP and HMGU Zr biokinetic models were evaluated. Based on the computational biokinetic modeling, the mean, standard uncertainty, and confidence interval of the model predictions, calculated from the model parameter uncertainty, were presented and compared to the plasma clearance and urinary excretion measured after intravenous administration. It was shown that for the most important compartment, the plasma, the uncertainty evaluated for the HMGU model was much smaller than that for the ICRP model; that phenomenon was observed for other organs and tissues as well. The uncertainty of the integral of the radioactivity of Zr up to 50 y calculated by the HMGU model after ingestion by adult members of the public was shown to be smaller by a factor of two than that of the ICRP model. It was also shown that the distribution type of the model parameter strongly influences the model prediction, and the correlation of the model input parameters affects the model prediction to a certain extent depending on the strength of the correlation. In the case of model prediction, the qualitative comparison of the model predictions with the measured plasma and urinary data showed the HMGU model to be more reliable than the ICRP model; quantitatively, the uncertainty of the model prediction by the HMGU systemic biokinetic model is smaller than that of the ICRP model. The uncertainty information on the model parameters analyzed in this study was used in the second part of the paper regarding a sensitivity analysis of the Zr biokinetic models.

  6. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO 2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.

  7. Uncertainty characterization and quantification in air pollution models. Application to the CHIMERE model

    NASA Astrophysics Data System (ADS)

    Debry, Edouard; Mallet, Vivien; Garaud, Damien; Malherbe, Laure; Bessagnet, Bertrand; Rouïl, Laurence

    2010-05-01

    Prev'Air is the French operational system for air pollution forecasting. It is developed and maintained by INERIS with financial support from the French Ministry for the Environment. On a daily basis it delivers forecasts up to three days ahead for ozone, nitrogen dioxide and particles over France and Europe. Maps of concentration peaks and daily averages are freely available to the general public. More accurate data can be provided to customers and modelers. Prev'Air forecasts are based on the Chemical Transport Model CHIMERE. French authorities rely more and more on this platform to alert the general public in case of high pollution events and to assess the efficiency of regulation measures when such events occur. For example, the road speed limit may be reduced in given areas when the ozone level exceeds a regulatory threshold. These operational applications require INERIS to assess the quality of its forecasts and to inform end users about the confidence level. Indeed, modeled concentrations always remain an approximation of the true concentrations because of the high uncertainty in input data, such as meteorological fields and emissions, because of incomplete or inaccurate representation of physical processes, and because of deficiencies in numerical integration [1]. We present in this communication the uncertainty analysis of the CHIMERE model carried out in the framework of an INERIS research project aiming, on the one hand, to assess the uncertainty of several deterministic models and, on the other hand, to propose relevant indicators describing air quality forecasts and their uncertainty. Several methods exist to assess the uncertainty of a model. Under given assumptions the model may be differentiated into an adjoint model which directly provides the sensitivity of concentrations to given parameters. But so far Monte Carlo methods seem to be the most widely and most often used [2,3], as they are relatively easy to implement. In this framework a probability density function (PDF) is associated with each input parameter, according to its assumed uncertainty. The combined PDFs are then propagated through the model by means of several simulations with randomly perturbed input parameters. One may then obtain an approximation of the PDF of modeled concentrations, provided the Monte Carlo process has reasonably converged. The uncertainty analysis with CHIMERE was carried out with a Monte Carlo method over the French domain and two periods: 13 days during January 2009, with a focus on particles, and 28 days during August 2009, with a focus on ozone. The results show that for the summer period and 500 simulations, the time- and space-averaged standard deviation for ozone is 16 µg/m3, to be compared with an average concentration of 89 µg/m3. It is noteworthy that the space-averaged standard deviation for ozone is relatively constant over time (the standard deviation of the time series itself is 1.6 µg/m3). The spatial variation of the ozone standard deviation seems to indicate that emissions have a significant impact, followed by western boundary conditions. The Monte Carlo simulations are then post-processed by both ensemble [4] and Bayesian [5] methods in order to assess the quality of the uncertainty estimation. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C.
Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the Paris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Mallet, V., and B. Sportisse (2006), Uncertainty in a chemistry-transport model due to physical parameterizations and numerical approximations: An ensemble approach applied to ozone modeling, J. Geophys. Res., 111, D01302, doi:10.1029/2005JD006149. (5) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
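
    As a minimal illustration of the Monte Carlo approach described above, the Python sketch below draws perturbed emission and boundary-condition scaling factors from assumed log-normal PDFs and propagates them through a stand-in model; the run_model function and the uncertainty magnitudes are hypothetical placeholders for a full CHIMERE simulation.

      # Minimal sketch of Monte Carlo uncertainty propagation for an air quality model,
      # assuming the model can be reduced to a callable of perturbed scaling factors.
      import numpy as np

      rng = np.random.default_rng(1)

      def run_model(emis_factor, bc_factor):
          # stand-in for a chemistry-transport run; returns an averaged ozone concentration
          return 89.0 * emis_factor ** 0.4 * bc_factor ** 0.2

      n = 500
      emis = rng.lognormal(mean=0.0, sigma=0.3, size=n)   # assumed emission uncertainty
      bc = rng.lognormal(mean=0.0, sigma=0.2, size=n)     # assumed boundary-condition uncertainty
      ozone = np.array([run_model(e, b) for e, b in zip(emis, bc)])

      print("mean =", ozone.mean(), "std =", ozone.std())  # ensemble spread approximates the uncertainty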

  8. Robust fault-tolerant tracking control design for spacecraft under control input saturation.

    PubMed

    Bustan, Danyal; Pariz, Naser; Sani, Seyyed Kamal Hosseini

    2014-07-01

    In this paper, a continuous globally stable tracking control algorithm is proposed for a spacecraft in the presence of unknown actuator failure, control input saturation, uncertainty in inertial matrix and external disturbances. The design method is based on variable structure control and has the following properties: (1) fast and accurate response in the presence of bounded disturbances; (2) robust to the partial loss of actuator effectiveness; (3) explicit consideration of control input saturation; and (4) robust to uncertainty in inertial matrix. In contrast to traditional fault-tolerant control methods, the proposed controller does not require knowledge of the actuator faults and is implemented without explicit fault detection and isolation processes. In the proposed controller a single parameter is adjusted dynamically in such a way that it is possible to prove that both attitude and angular velocity errors will tend to zero asymptotically. The stability proof is based on a Lyapunov analysis and the properties of the singularity free quaternion representation of spacecraft dynamics. Results of numerical simulations state that the proposed controller is successful in achieving high attitude performance in the presence of external disturbances, actuator failures, and control input saturation. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k-infinity and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for such purposes, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest amount of isotopic covariance matrices among the different major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 x 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin Hypercube Sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS) because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonance self-shielding calculations such as DRAGONv4. (authors)
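
    The stratification argument for LHS over SRS can be demonstrated with a few lines of Python; the two-input linear response below is only a toy stand-in for a lattice calculation, and the use of SciPy's qmc module is an implementation choice, not part of the study.

      # Minimal sketch contrasting Latin Hypercube Sampling with simple random sampling
      # for two normally distributed cross-section multipliers.
      import numpy as np
      from scipy.stats import norm, qmc

      def response(x):                         # toy stand-in for a k-infinity prediction
          return 1.30 + 0.02 * x[:, 0] - 0.01 * x[:, 1]

      n = 500
      sampler = qmc.LatinHypercube(d=2, seed=2)
      u = sampler.random(n)                    # stratified uniforms in [0, 1)^2
      x_lhs = norm.ppf(u)                      # map to standard normal multipliers
      x_srs = np.random.default_rng(2).standard_normal((n, 2))

      print("LHS output std:", response(x_lhs).std())   # usually closer to the true spread
      print("SRS output std:", response(x_srs).std())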

  10. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
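
    A chaos expansion of the kind described above can be sketched in a few lines: the model output is projected onto probabilists' Hermite polynomials of a standard normal parameter and the coefficients are fitted at collocation points. The toy response function below is an assumption for illustration only and is unrelated to the hydrological model used in the study.

      # Minimal sketch of a Hermite chaos surrogate for a scalar output driven by one
      # standard normal parameter, fitted by least squares at collocation samples.
      import numpy as np
      from numpy.polynomial.hermite_e import hermevander
      from math import factorial

      def model(xi):                      # toy response to a N(0,1) parameter
          return np.exp(0.3 * xi) + 0.1 * xi ** 2

      xi = np.random.default_rng(3).standard_normal(200)   # collocation samples
      Psi = hermevander(xi, 4)                              # probabilists' Hermite basis, degree 4
      coef, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

      # With probabilists' Hermite polynomials the mean is the zeroth coefficient and
      # the variance follows from the basis norms E[He_k^2] = k!.
      var = sum(coef[k] ** 2 * factorial(k) for k in range(1, 5))
      print("PCE mean =", coef[0], "PCE std =", var ** 0.5)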

  11. A General Uncertainty Quantification Methodology for Cloud Microphysical Property Retrievals

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Xie, S.; Chen, X.; Zhao, C.

    2014-12-01

    The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program provides long-term (~20 years) ground-based cloud remote sensing observations. However, there are large uncertainties in the retrieval products of cloud microphysical properties based on the active and/or passive remote-sensing measurements. To address this uncertainty issue, a DOE Atmospheric System Research scientific focus study, Quantification of Uncertainties in Cloud Retrievals (QUICR), has been formed. In addition to an overview of recent progress of QUICR, we will demonstrate the capacity of an observation-based general uncertainty quantification (UQ) methodology via the ARM Climate Research Facility baseline cloud microphysical properties (MICROBASE) product. This UQ method utilizes the Karhunen-Loéve expansion (KLE) and Central Limit Theorems (CLT) to quantify the retrieval uncertainties from observations and algorithm parameters. The input perturbations are imposed on major modes to take into account the cross correlations between input data, which greatly reduces the dimension of random variables (up to a factor of 50) and quantifies vertically resolved full probability distribution functions of retrieved quantities. Moreover, this KLE/CLT approach has the capability of attributing the uncertainties in the retrieval output to individual uncertainty source and thus sheds light on improving the retrieval algorithm and observations. We will present the results of a case study for the ice water content at the Southern Great Plains during an intensive observing period on March 9, 2000. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
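
    The dimension-reduction step mentioned above can be illustrated with a small Karhunen-Loève-style example in Python: a correlated vertical profile is perturbed along a handful of leading eigenmodes of its covariance instead of level by level. The exponential correlation model and the number of retained modes are illustrative assumptions, not those of the MICROBASE analysis.

      # Minimal sketch of KLE-style dimension reduction for a correlated input profile.
      import numpy as np

      rng = np.random.default_rng(4)
      z = np.linspace(0, 10, 100)                       # vertical levels
      cov = np.exp(-np.abs(z[:, None] - z[None, :]))    # assumed correlation structure
      vals, vecs = np.linalg.eigh(cov)
      idx = np.argsort(vals)[::-1][:5]                  # keep 5 leading modes (factor-20 reduction)
      modes, lams = vecs[:, idx], vals[idx]

      xi = rng.standard_normal(5)                       # independent standard normal weights
      perturbation = modes @ (np.sqrt(lams) * xi)       # one correlated input realization
      print(perturbation.shape)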

  12. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or, when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.

  13. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    NASA Astrophysics Data System (ADS)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating the posterior parameter distribution with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handle the unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the framework of SMC. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, including a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model, first considering only parameter uncertainty and then considering parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicated that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.

  14. Data uncertainties in material flow analysis: Municipal solid waste management system in Maputo City, Mozambique.

    PubMed

    Dos Muchangos, Leticia Sarmento; Tokai, Akihiro; Hanashima, Atsuko

    2017-01-01

    Material flow analysis can effectively trace and quantify the flows and stocks of materials such as solid wastes in urban environments. However, the integrity of material flow analysis results is compromised by data uncertainties, an occurrence that is particularly acute in low-and-middle-income study contexts. This article investigates the uncertainties in the input data and their effects in a material flow analysis study of municipal solid waste management in Maputo City, the capital of Mozambique. The analysis is based on data collected in 2007 and 2014. Initially, the uncertainties and their ranges were identified by the data classification model of Hedbrant and Sörme, followed by the application of sensitivity analysis. The average lower and upper bounds were 29% and 71%, respectively, in 2007, increasing to 41% and 96%, respectively, in 2014. This indicates higher data quality in 2007 than in 2014. Results also show that not only data are partially missing from the established flows such as waste generation to final disposal, but also that they are limited and inconsistent in emerging flows and processes such as waste generation to material recovery (hence the wider variation in the 2014 parameters). The sensitivity analysis further clarified the most influencing parameter and the degree of influence of each parameter on the waste flows and the interrelations among the parameters. The findings highlight the need for an integrated municipal solid waste management approach to avoid transferring or worsening the negative impacts among the parameters and flows.

  15. Orbit control of a stratospheric satellite with parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Xu, Ming; Huo, Wei

    2016-12-01

    When a stratospheric satellite travels with the prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to maintain a constant-latitude orbital flight. To design the orbit control system, a 6 degree-of-freedom (DOF) model of the satellite is established based on the second Lagrangian formulation. It is proven that input/output feedback linearization theory cannot be directly applied to the orbit control with this model, so three subsystem models are deduced from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies the high precision requirements for the orbital flight of the satellite.

  16. Impact of Uncertainties in Meteorological Forcing Data and Land Surface Parameters on Global Estimates of Terrestrial Water Balance Components

    NASA Astrophysics Data System (ADS)

    Nasonova, O. N.; Gusev, Ye. M.; Kovalev, Ye. E.

    2009-04-01

    Global estimates of the components of terrestrial water balance depend on the estimation technique and on the global observational data sets used for this purpose. Land surface modelling is an up-to-date and powerful tool for such estimates. However, the results of modelling are affected by the quality of both the model and the input information (including meteorological forcing data and model parameters). The latter is based on available global data sets containing meteorological data, land-use information, and soil and vegetation characteristics. Many global data sets are now available, and they differ in spatial and temporal resolution, as well as in accuracy and reliability. Evidently, uncertainties in global data sets will influence the results of model simulations, but to what extent? The present work is an attempt to investigate this issue. The work is based on the land surface model SWAP (Soil Water - Atmosphere - Plants) and global 1-degree data sets on meteorological forcing data and the land surface parameters, provided within the framework of the Second Global Soil Wetness Project (GSWP-2). The 3-hourly near-surface meteorological data (for the period from 1 July 1982 to 31 December 1995) are based on reanalyses and gridded observational data used in the International Satellite Land-Surface Climatology Project (ISLSCP) Initiative II. Following the GSWP-2 strategy, we used a number of alternative global forcing data sets to perform different sensitivity experiments (with six alternative versions of precipitation, four versions of radiation, two pure reanalysis products and two fully hybridized products of meteorological data). To reveal the influence of model parameters on simulations, in addition to GSWP-2 parameter data sets, we produced two alternative global data sets with soil parameters on the basis of their relationships with the content of clay and sand in the soil. After this, sensitivity experiments with three different sets of parameters were performed. As a result, 16 variants of global annual estimates of water balance components were obtained. Application of alternative data sets on radiation, precipitation, and soil parameters allowed us to reveal the influence of uncertainties in input data on global estimates of water balance components.

  17. From Atmospheric Neutrinos to the Neutrino Mass Hierarchy

    NASA Astrophysics Data System (ADS)

    Kappes, A.

    2015-08-01

    After a brief introduction to neutrino oscillation, the article discusses how proposed detectors like PINGU and ORCA can use atmospheric neutrinos in the GeV range to determine the neutrino mass hierarchy, one of the crucial unknowns in the neutrino sector of particle physics, and what uncertainties on external input parameters have to be taken into account.

  18. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
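
    For readers unfamiliar with the mean value approach mentioned above, the Python sketch below shows the basic idea: the output variance is assembled from finite-difference sensitivities of the response with respect to each uncertain input. The three-parameter front-velocity surrogate and the numerical values are purely illustrative assumptions, not the 25-parameter finite element model of the study.

      # Minimal sketch of a mean-value (first-order) uncertainty estimate built from
      # finite-difference sensitivities of a toy front-velocity model.
      import numpy as np

      mean = np.array([1.5e5, 0.03, 300.0])   # hypothetical activation energy, conductivity, density
      std = np.array([5.0e3, 0.003, 10.0])    # assumed standard deviations

      def front_velocity(p):                  # stand-in for the finite element model
          E, k, rho = p
          return 1e-3 * k ** 0.5 * np.exp(-E / 2.5e5) / rho ** 0.3

      base = front_velocity(mean)
      var = 0.0
      for i in range(3):                      # central differences, one parameter at a time
          dp = np.zeros(3); dp[i] = 1e-4 * mean[i]
          dvdp = (front_velocity(mean + dp) - front_velocity(mean - dp)) / (2 * dp[i])
          var += (dvdp * std[i]) ** 2         # first-order variance contribution

      print("velocity =", base, "+/-", var ** 0.5)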

  19. M-MRAC Backstepping for Systems with Unknown Virtual Control Coefficients

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2015-01-01

    The paper presents an over-parametrization free certainty equivalence state feedback backstepping adaptive control design method for systems of any relative degree with unmatched uncertainties and unknown virtual control coefficients. It uses a fast prediction model to estimate the unknown parameters, which is independent of the control design. It is shown that the system's input and output tracking errors can be systematically decreased by the proper choice of the design parameters. The benefits of the approach are demonstrated in numerical simulations.

  20. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.

  1. Parameter-induced uncertainty quantification of a regional N2O and NO3 inventory using the biogeochemical model LandscapeDNDC

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus

    2014-05-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventories in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems like arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on e.g. microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result, model uncertainty does not originate solely from input uncertainty but is also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty on nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained by a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. Average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 [kg N / ha] with a median value of 1.05 [kg N / ha]. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 [kg N / ha] due to fertilizer use. In the regional uncertainty quantification the 20% likelihood range for N2O emissions is 0.79 - 1.37 [kg N / ha] (50% likelihood: 0.46 - 2.05 [kg N / ha]; 90% likelihood: 0.11 - 4.03 [kg N / ha]). Respective quantities were calculated for nitrate leaching. The method has proven its applicability to quantify parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process-based biogeochemical models.

  2. Combined input shaping and feedback control for double-pendulum systems

    NASA Astrophysics Data System (ADS)

    Mar, Robert; Goyal, Anurag; Nguyen, Vinh; Yang, Tianle; Singhose, William

    2017-02-01

    A control system combining input shaping and feedback is developed for double-pendulum systems subjected to external disturbances. The proposed control method achieves fast point-to-point response similar to open-loop input-shaping control. It also minimizes transient deflections during the motion of the system, and disturbance-induced residual swing using the feedback control. Effects of parameter variations such as the mass ratio of the double pendulum, the suspension length ratio, and the move distance were studied via numerical simulation. The most important results were also verified with experiments on a small-scale crane. The controller effectively suppresses the disturbances and is robust to modelling uncertainties and task variations.

  3. On the formulation of a minimal uncertainty model for robust control with structured uncertainty

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert

    1991-01-01

    In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.

  4. A New Approach for Obtaining Cosmological Constraints from Type Ia Supernovae using Approximate Bayesian Computation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wolf, Rachel; Sako, Masao

    2016-11-09

    Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the `Tripp' and `Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) − w0(best fit) = −0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = −0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = −0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.

  5. Probabilistic Mass Growth Uncertainties

    NASA Technical Reports Server (NTRS)

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both Earth-orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost-driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  6. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
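
    The kind of Monte Carlo wrapper described above can be sketched in a few lines of Python: each dissolved-ion constraint gets a distribution from its mean and coefficient of variation, samples are drawn, and a stand-in model is evaluated per draw. The run_netpath function is hypothetical; the real workflow edits NETPATH input files and parses its output.

      # Minimal sketch of a Monte Carlo wrapper around an inverse geochemical model.
      import numpy as np

      rng = np.random.default_rng(5)
      constraints = {"Ca": (40.0, 0.05), "Mg": (12.0, 0.10), "HCO3": (180.0, 0.08)}  # mean, CV (illustrative)

      def run_netpath(sample):               # placeholder for a NETPATH batch run
          return 0.4 + 0.002 * (sample["Ca"] - 40.0) - 0.005 * (sample["Mg"] - 12.0)

      fractions = []
      for _ in range(1000):
          sample = {k: rng.normal(m, m * cv) for k, (m, cv) in constraints.items()}
          fractions.append(run_netpath(sample))

      print(np.percentile(fractions, [5, 50, 95]))   # spread in the source-water mixing fraction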

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alvarez-Ramirez, J.; Aguilar, R.; Lopez-Isunza, F.

    FCC processes involve complex interactive dynamics and poorly known reaction kinetics, which make the units difficult to operate and control. This work concerns the synthesis of temperature controllers for FCC units. The problem is addressed first for the case where perfect knowledge of the reaction kinetics is assumed, leading to an input-output linearizing state feedback. However, in most industrial FCC units, perfect knowledge of reaction kinetics and composition measurements is not available. To address the problem of robustness against uncertainties in the reaction kinetics, an adaptive model-based nonlinear controller with simplified reaction models is presented. The adaptive strategy makes use of estimates of uncertainties derived from calorimetric (energy) balances. The resulting controller is similar in form to standard input-output linearizing controllers and can be tuned analogously. Alternatively, the controller can be tuned using a single gain parameter and is computationally efficient. The performance of the closed-loop system and the controller design procedure are shown with simulations.

  8. Validation of Heat Transfer Thermal Decomposition and Container Pressurization of Polyurethane Foam.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, Sarah Nicole; Dodd, Amanda B.; Larsen, Marvin E.

    Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. In fire environments, gas pressure from thermal decomposition of polymers can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of PMDI-based polyurethane foam is presented to assess the validity of the computational model. Both experimental measurement uncertainty and model prediction uncertainty are examined and compared. Both the mean value method and the Latin hypercube sampling approach are used to propagate the uncertainty through the model. In addition to comparing computational and experimental results, the importance of each input parameter on the simulation result is also investigated. These results show that further development in the physics model of the foam and appropriate associated material testing are necessary to improve model accuracy.

  9. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
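
    As a compact illustration of the variance-based approach described above, the Python sketch below estimates first-order Sobol' indices with the standard pick-and-freeze (Saltelli) estimator applied to a cheap surrogate; the quadratic toy function stands in for the Gaussian process emulator, and the three uniform inputs are hypothetical.

      # Minimal sketch of first-order Sobol' index estimation on a cheap surrogate.
      import numpy as np

      rng = np.random.default_rng(6)
      d, n = 3, 20000

      def emulator(x):                       # toy surrogate: e.g. emission, wind, release height
          return 5.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

      A, B = rng.uniform(0, 1, (n, d)), rng.uniform(0, 1, (n, d))
      fA, fB = emulator(A), emulator(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy(); ABi[:, i] = B[:, i]          # replace column i of A by B's column i
          Si = np.mean(fB * (emulator(ABi) - fA)) / var
          print(f"first-order Sobol index for x{i}: {Si:.2f}")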

  10. A modified micrometeorological gradient method for estimating O3 dry depositions over a forest canopy

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, L.; Wang, X. M.; Munger, J. W.

    2015-07-01

    Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air-surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method (MBR). A modified micrometeorological gradient method (MGM) is proposed in this study for estimating O3 dry deposition fluxes over a forest canopy using concentration gradients between a level above and a level below the canopy top, taking advantage of the relatively large gradients between these levels due to significant pollutant uptake in the top layers of the canopy. The new method is compared with the AGM and MBR methods and is also evaluated using eddy-covariance (EC) flux measurements collected at the Harvard Forest Environmental Measurement Site, Massachusetts, during 1993-2000. All three gradient methods (AGM, MBR, and MGM) produced diurnal cycles of O3 dry deposition velocity (Vd(O3)) similar to the EC measurements, with the MGM method being the closest in magnitude to the EC measurements. The multi-year average Vd(O3) differed significantly between these methods, with the AGM, MBR, and MGM methods being 2.28, 1.45, and 1.18 times that of the EC, respectively. Sensitivity experiments identified several input parameters for the MGM method as first-order parameters that affect the estimated Vd(O3). A 10% uncertainty in the wind speed attenuation coefficient or canopy displacement height can cause about 10% uncertainty in the estimated Vd(O3). An unrealistic leaf area density vertical profile can cause an uncertainty of a factor of 2.0 in the estimated Vd(O3). Other input parameters or formulas for stability functions only caused an uncertainty of a few percent. The new method provides an alternative approach to monitoring/estimating long-term deposition fluxes of similar pollutants over tall canopies.

  11. Uncertainty Quantification of Evapotranspiration and Infiltration from Modeling and Historic Time Series at the Savannah River F-Area

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Flach, G. P.

    2012-12-01

    The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Baier-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
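
    The aggregation of several alternative models into a p-box can be illustrated with a short Python sketch: the lower and upper envelopes of the empirical CDFs produced by different PET models bound the combined epistemic and aleatory uncertainty. The three sample arrays below are illustrative stand-ins for Monte Carlo PET output from alternative models, not results from the study.

      # Minimal sketch of building an empirical p-box from several model outputs.
      import numpy as np

      rng = np.random.default_rng(7)
      pet_models = [rng.normal(1200, 80, 5000),      # stand-in for a Penman-type estimate, mm/yr
                    rng.normal(1100, 60, 5000),      # stand-in for a Hargreaves-type estimate
                    rng.normal(1250, 100, 5000)]     # stand-in for a Priestley-Taylor-type estimate

      grid = np.linspace(800, 1600, 200)
      cdfs = np.array([[np.mean(m <= g) for g in grid] for m in pet_models])
      lower, upper = cdfs.min(axis=0), cdfs.max(axis=0)   # p-box bounds at each PET value

      # interval bounding the median PET: where the upper envelope first reaches 0.5
      # (lower bound) and where the lower envelope first reaches 0.5 (upper bound)
      print(grid[np.argmax(upper >= 0.5)], grid[np.argmax(lower >= 0.5)])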

  12. Material characterization in partially filled waveguides using inverse scattering and multiple sample orientations

    NASA Astrophysics Data System (ADS)

    Sjöberg, Daniel; Larsson, Christer

    2015-06-01

    We present a method aimed at reducing uncertainties and instabilities when characterizing materials in waveguide setups. The method is based on measuring the S parameters for three different orientations of a rectangular sample block in a rectangular waveguide. The corresponding geometries are modeled in a commercial full-wave simulation program, taking any material parameters as input. The material parameters of the sample are found by minimizing the squared distance between measured and calculated S parameters. The information added by the different sample orientations is quantified using the Cramér-Rao lower bound. The flexibility of the method allows the determination of material parameters of an arbitrarily shaped sample that fits in the waveguide.

  13. Probabilistic Radiological Performance Assessment Modeling and Uncertainty

    NASA Astrophysics Data System (ADS)

    Tauxe, J.

    2004-12-01

    A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of potential future risk to human receptors of disposal of LLW. Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and propagating through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model which supports their decision-making. While this information may make some regulators uncomfortable, they must realize that uncertainties which were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is now known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious. This does not represent any particular site and is meant to be a generic example. A practitioner could, however, start with this model as a GoldSim template and, by adding site specific features and parameter values (distributions), use this model as a starting point for a real model to be used in real decision making.

  14. Impact of input data uncertainty on environmental exposure assessment models: A case study for electromagnetic field modelling from mobile phone base stations.

    PubMed

    Beekhuizen, Johan; Heuvelink, Gerard B M; Huss, Anke; Bürgi, Alfred; Kromhout, Hans; Vermeulen, Roel

    2014-11-01

    With the increased availability of spatial data and computing power, spatial prediction approaches have become a standard tool for exposure assessment in environmental epidemiology. However, such models are largely dependent on accurate input data. Uncertainties in the input data can therefore have a large effect on model predictions, but are rarely quantified. With Monte Carlo simulation we assessed the effect of input uncertainty on the prediction of radio-frequency electromagnetic fields (RF-EMF) from mobile phone base stations at 252 receptor sites in Amsterdam, The Netherlands. The impact on ranking and classification was determined by computing the Spearman correlations and weighted Cohen's Kappas (based on tertiles of the RF-EMF exposure distribution) between modelled values and RF-EMF measurements performed at the receptor sites. The uncertainty in modelled RF-EMF levels was large with a median coefficient of variation of 1.5. Uncertainty in receptor site height, building damping and building height contributed most to model output uncertainty. For exposure ranking and classification, the heights of buildings and receptor sites were the most important sources of uncertainty, followed by building damping, antenna- and site location. Uncertainty in antenna power, tilt, height and direction had a smaller impact on model performance. We quantified the effect of input data uncertainty on the prediction accuracy of an RF-EMF environmental exposure model, thereby identifying the most important sources of uncertainty and estimating the total uncertainty stemming from potential errors in the input data. This approach can be used to optimize the model and better interpret model output. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
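
    A small regression network of the kind described above can be set up in a few lines; the sketch below uses scikit-learn as one possible implementation and synthetic data in place of the Jinan monitoring record, with four of the meteorological predictors named in the abstract as assumed inputs.

      # Minimal sketch of an ANN relating meteorological variables to ozone.
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(8)
      n = 500
      X = np.column_stack([rng.normal(25, 8, n),    # maximum temperature
                           rng.normal(1010, 8, n),  # atmospheric pressure
                           rng.uniform(0, 12, n),   # sunshine duration
                           rng.uniform(0, 10, n)])  # maximum wind speed
      y = 2.5 * X[:, 0] - 0.5 * X[:, 1] + 4.0 * X[:, 2] + rng.normal(0, 5, n) + 600  # synthetic ozone

      Xs = StandardScaler().fit_transform(X)
      ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(Xs, y)
      print("R^2 =", ann.score(Xs, y))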

  16. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.

  17. Approximate Bayesian evaluations of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha

    2018-04-01

    The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
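
    A worked example helps to show what the classical GUM formulas referred to above compute in the simplest case: the combined standard uncertainty of an output from first-order sensitivities and the input standard uncertainties. The quotient model and the numbers below are arbitrary illustrations, not one of the measurement examples in the paper.

      # Minimal sketch of GUM-style first-order uncertainty propagation for y = x1 / x2.
      import numpy as np

      x = np.array([10.0, 2.0])          # input estimates (illustrative values)
      u = np.array([0.1, 0.05])          # standard uncertainties of the inputs

      def f(x):
          return x[0] / x[1]

      grad = np.array([1.0 / x[1], -x[0] / x[1] ** 2])   # analytical sensitivity coefficients
      u_y = np.sqrt(np.sum((grad * u) ** 2))             # combined standard uncertainty
      print(f(x), "+/-", u_y)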

  18. Estimation of uncertainty in pKa values determined by potentiometric titration.

    PubMed

    Koort, Eve; Herodes, Koit; Pihl, Viljar; Leito, Ivo

    2004-06-01

    A procedure is presented for estimation of uncertainty in measurement of the pK(a) of a weak acid by potentiometric titration. The procedure is based on the ISO GUM. The core of the procedure is a mathematical model that involves 40 input parameters. A novel approach is used to take into account the purity of the acid: the impurities are not treated as inert compounds only; their possible acidic dissociation is also taken into account. Application to an example of practical pK(a) determination is presented. Altogether 67 different sources of uncertainty are identified and quantified within the example. The relative importance of different uncertainty sources is discussed. The most important source of uncertainty (with the experimental set-up of the example) is the uncertainty of the pH measurement, followed by the accuracy of the burette and the uncertainty of weighing. The procedure gives the uncertainty separately for each point of the titration curve. The uncertainty depends on the amount of titrant added, being lowest in the central part of the titration curve. The possibilities of reducing the uncertainty and interpreting the drift of the pK(a) values obtained from the same curve are discussed.

  19. Constraining uncertainties in water supply reliability in a tropical data scarce basin

    NASA Astrophysics Data System (ADS)

    Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte

    2015-04-01

    Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability are often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela with an area of approximately 2100 km² was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a more complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
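
    A hedged sketch of how a Budyko-type water balance can be combined with Monte Carlo sampling of parameter and climatic input uncertainty to estimate supply reliability. Fu's parametric form of the Budyko curve is used here as one common choice; the shape-parameter range, demand level and climate statistics are illustrative assumptions, not values from the study.

```python
# Monthly water-supply reliability from a Budyko-type (Fu) water balance,
# with Monte Carlo sampling of the shape parameter and climatic inputs.
import numpy as np

rng = np.random.default_rng(1)

def fu_runoff(P, PET, omega):
    """Runoff from Fu's curve: E/P = 1 + PET/P - (1 + (PET/P)**omega)**(1/omega)."""
    phi = PET / P
    evap_ratio = 1.0 + phi - (1.0 + phi**omega) ** (1.0 / omega)
    return P * (1.0 - evap_ratio)                  # runoff depth, same units as P

n = 20_000
P = rng.normal(180.0, 30.0, n).clip(min=1.0)       # monthly precipitation [mm]
PET = rng.normal(120.0, 15.0, n).clip(min=1.0)     # potential evapotranspiration [mm]
omega = rng.uniform(1.5, 3.5, n)                   # parameter uncertainty

demand = 60.0                                      # irrigation + urban demand [mm]
reliability = np.mean(fu_runoff(P, PET, omega) >= demand)
print(f"water supply reliability = {100 * reliability:.1f}%")
```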

  20. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.
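
    The (2%, 2 mm) global gamma analysis used in the validation can be illustrated with a brute-force implementation: for each reference pixel above the low-dose threshold, the gamma value is the minimum, over all evaluated pixels, of the combined dose-difference and distance-to-agreement metric, and a pixel passes if gamma is at most 1. The sketch below uses random test arrays and a simple square grid and is not the authors' code.

```python
# Illustrative brute-force 2D global gamma analysis (2%, 2 mm, 10% low-dose cut).
import numpy as np

def gamma_pass_rate(ref, ev, pixel_mm=1.0, dose_tol=0.02, dist_tol_mm=2.0, low_dose_cut=0.10):
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dose_norm = dose_tol * ref.max()                # global dose normalisation
    mask = ref >= low_dose_cut * ref.max()          # low-dose threshold on the reference
    passed, total = 0, 0
    for iy, ix in zip(*np.nonzero(mask)):
        dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * pixel_mm**2 / dist_tol_mm**2
        dd2 = (ev - ref[iy, ix]) ** 2 / dose_norm**2
        gamma = np.sqrt(np.min(dist2 + dd2))        # search over all evaluated pixels
        passed += gamma <= 1.0
        total += 1
    return passed / total

ref = np.random.default_rng(0).random((40, 40))
ev = ref * (1 + 0.01 * np.random.default_rng(1).standard_normal(ref.shape))
print(f"gamma pass rate: {100 * gamma_pass_rate(ref, ev):.1f}%")
```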

  1. Analysis of the NAEG model of transuranic radionuclide transport and dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kercher, J.R.; Anspaugh, L.R.

    We analyze the model for estimating the dose from ²³⁹Pu developed for the Nevada Applied Ecology Group (NAEG) by using sensitivity analysis and uncertainty analysis. Sensitivity analysis results suggest that the air pathway is the critical pathway for the organs receiving the highest dose. Soil concentration and the factors controlling air concentration are the most important parameters. The only organ whose dose is sensitive to parameters in the ingestion pathway is the GI tract. The air pathway accounts for 100% of the dose to lung, upper respiratory tract, and thoracic lymph nodes; and 95% of its dose via ingestion. Leafy vegetable ingestion accounts for 70% of the dose from the ingestion pathway regardless of organ, peeled vegetables 20%; accidental soil ingestion 5%; ingestion of beef liver 4%; beef muscle 1%. Only a handful of model parameters control the dose for any one organ. The number of important parameters is usually less than 10. Uncertainty analysis indicates that choosing a uniform distribution for the input parameters produces a lognormal distribution of the dose. The ratio of the square root of the variance to the mean is three times greater for the doses than it is for the individual parameters. As found by the sensitivity analysis, the uncertainty analysis suggests that only a few parameters control the dose for each organ. All organs have similar distributions and variance to mean ratios except for the lymph nodes. 16 references, 9 figures, 13 tables.

  2. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    NASA Astrophysics Data System (ADS)

    White, Jeremy; Stengel, Victoria; Rendon, Samuel; Banta, John

    2017-08-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash-Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.
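
    The GLUE conditioning step described above can be sketched as follows: sample many parameter sets, run the model for each, retain as behavioral only those realizations whose Nash-Sutcliffe efficiency against daily mean streamflow exceeds a threshold, and use the retained ensemble to bound the quantities of interest. The two-parameter toy model, the synthetic forcing and the 0.6 threshold below are assumptions for illustration, not the SWAT configuration used in the study.

```python
# GLUE-style behavioral screening against daily mean streamflow (illustrative).
import numpy as np

rng = np.random.default_rng(7)

def nse(sim, obs):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_model(theta, forcing):
    # Stand-in for a full watershed model: a two-parameter rainfall-runoff response.
    k, c = theta
    return c * np.convolve(forcing, np.exp(-np.arange(30) / k), mode="same")

forcing = rng.gamma(0.8, 5.0, 365)                           # synthetic daily rainfall
obs = toy_model((6.0, 0.4), forcing) + rng.normal(0, 0.3, 365)

samples = np.column_stack([rng.uniform(1, 15, 5000), rng.uniform(0.1, 0.9, 5000)])
behavioral = [th for th in samples if nse(toy_model(th, forcing), obs) > 0.6]
print(f"{len(behavioral)} of {len(samples)} realizations are behavioral")
```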

  3. The importance of parameterization when simulating the hydrologic response of vegetative land-cover change

    USGS Publications Warehouse

    White, Jeremy; Stengel, Victoria G.; Rendon, Samuel H.; Banta, John

    2017-01-01

    Computer models of hydrologic systems are frequently used to investigate the hydrologic response of land-cover change. If the modeling results are used to inform resource-management decisions, then providing robust estimates of uncertainty in the simulated response is an important consideration. Here we examine the importance of parameterization, a necessarily subjective process, on uncertainty estimates of the simulated hydrologic response of land-cover change. Specifically, we applied the soil water assessment tool (SWAT) model to a 1.4 km2 watershed in southern Texas to investigate the simulated hydrologic response of brush management (the mechanical removal of woody plants), a discrete land-cover change. The watershed was instrumented before and after brush-management activities were undertaken, and estimates of precipitation, streamflow, and evapotranspiration (ET) are available; these data were used to condition and verify the model. The role of parameterization in brush-management simulation was evaluated by constructing two models, one with 12 adjustable parameters (reduced parameterization) and one with 1305 adjustable parameters (full parameterization). Both models were subjected to global sensitivity analysis as well as Monte Carlo and generalized likelihood uncertainty estimation (GLUE) conditioning to identify important model inputs and to estimate uncertainty in several quantities of interest related to brush management. Many realizations from both parameterizations were identified as behavioral in that they reproduce daily mean streamflow acceptably well according to Nash–Sutcliffe model efficiency coefficient, percent bias, and coefficient of determination. However, the total volumetric ET difference resulting from simulated brush management remains highly uncertain after conditioning to daily mean streamflow, indicating that streamflow data alone are not sufficient to inform the model inputs that influence the simulated outcomes of brush management the most. Additionally, the reduced-parameterization model grossly underestimates uncertainty in the total volumetric ET difference compared to the full-parameterization model; total volumetric ET difference is a primary metric for evaluating the outcomes of brush management. The failure of the reduced-parameterization model to provide robust uncertainty estimates demonstrates the importance of parameterization when attempting to quantify uncertainty in land-cover change simulations.

  4. Neural network-based preprocessing to estimate the parameters of the X-ray emission of a single-temperature thermal plasma

    NASA Astrophysics Data System (ADS)

    Ichinohe, Y.; Yamada, S.; Miyazaki, N.; Saito, S.

    2018-04-01

    We present data preprocessing based on an artificial neural network to estimate the parameters of the X-ray emission spectra of a single-temperature thermal plasma. The method finds appropriate parameters close to the global optimum. The neural network is designed to learn the parameters of the thermal plasma (temperature, abundance, normalization and redshift) of the input spectra. After training using 9000 simulated X-ray spectra, the network was able to predict all the unknown parameters with uncertainties of a few per cent. The performance dependence on the network structure has been studied. We applied the neural network to an actual high-resolution spectrum obtained with Hitomi. The predicted plasma parameters agree with the known best-fitting parameters of the Perseus cluster within uncertainties of ≲10 per cent. The result shows that neural networks trained on simulated data might be used to extract features embedded in the data. This would reduce human-intensive preprocessing costs before detailed spectral analysis, and would help us make the best use of the large quantities of spectral data that will be available in the coming decades.

  5. Fast and optimized methodology to generate road traffic emission inventories and their uncertainties

    NASA Astrophysics Data System (ADS)

    Blond, N.; Ho, B. Q.; Clappier, A.

    2012-04-01

    Road traffic emissions are one of the main sources of air pollution in cities. They are also one of the main sources of uncertainty in the air quality numerical models used for forecasting and for defining abatement strategies. Until now, the available models for generating road traffic emissions have required considerable effort, money and time. This inhibits decisions to preserve air quality, especially in developing countries where road traffic emissions are changing very fast. In this research, we developed a new model designed to produce road traffic emission inventories quickly. This model, called EMISENS, combines the well-known top-down and bottom-up approaches and forces them to be coherent. A Monte Carlo methodology is included for computing emission uncertainties and the uncertainty contribution of each input parameter. This paper presents the EMISENS model and demonstrates its capabilities through an application over the Strasbourg region (Alsace), France. The same input data as collected for the Circul'air model (based on the bottom-up approach), which has been applied for many years to forecast and study air pollution by the Alsatian air quality agency ASPA, are used to evaluate the impact of several simplifications that a user could make. These experiments make it possible to review older methodologies and to evaluate EMISENS results when few input data are available to produce emission inventories, as in developing countries, and assumptions need to be made. We show that the same average fraction of mileage driven with a cold engine can be used for all the cells of the study domain and that one emission factor could replace both cold and hot emission factors.
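
    A hedged sketch of the Monte Carlo treatment described above, applied to a generic emission-inventory relation (fleet size × mileage × emission factor); the distributions, pollutant and variance-share calculation are illustrative and are not the EMISENS formulation itself.

```python
# Monte Carlo uncertainty of a simple emission inventory with a crude ranking
# of each input's contribution to the output variance (illustrative numbers).
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

vehicles = rng.normal(2.0e5, 2.0e4, n)           # vehicle count
mileage = rng.normal(12.0, 1.5, n)               # km per vehicle per day
ef_hot = rng.lognormal(np.log(0.15), 0.25, n)    # g NOx per km, hot engine

emissions = vehicles * mileage * ef_hot / 1e6    # tonnes per day
print(f"emissions: {emissions.mean():.2f} ± {emissions.std(ddof=1):.2f} t/day")

# Crude first-order share of each input in the output variance.
for name, x in [("vehicles", vehicles), ("mileage", mileage), ("ef_hot", ef_hot)]:
    r = np.corrcoef(x, emissions)[0, 1]
    print(f"  {name:9s} share of variance ~ {100 * r**2:.0f}%")
```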

  6. An uncertainty and sensitivity analysis applied to the prioritisation of pharmaceuticals as surface water contaminants from wastewater treatment plant direct emissions.

    PubMed

    Morais, Sérgio Alberto; Delerue-Matos, Cristina; Gabarrell, Xavier

    2014-08-15

    In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank compounds based on priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of output results.

  7. An approach based on Hierarchical Bayesian Graphical Models for measurement interpretation under uncertainty

    NASA Astrophysics Data System (ADS)

    Skataric, Maja; Bose, Sandip; Zeroug, Smaine; Tilke, Peter

    2017-02-01

    It is not uncommon in the field of non-destructive evaluation that multiple measurements encompassing a variety of modalities are available for analysis and interpretation for determining the underlying states of nature of the materials or parts being tested. Despite and sometimes due to the richness of data, significant challenges arise in the interpretation manifested as ambiguities and inconsistencies due to various uncertain factors in the physical properties (inputs), environment, measurement device properties, human errors, and the measurement data (outputs). Most of these uncertainties cannot be described by any rigorous mathematical means, and modeling of all possibilities is usually infeasible for many real time applications. In this work, we will discuss an approach based on Hierarchical Bayesian Graphical Models (HBGM) for the improved interpretation of complex (multi-dimensional) problems with parametric uncertainties that lack usable physical models. In this setting, the input space of the physical properties is specified through prior distributions based on domain knowledge and expertise, which are represented as Gaussian mixtures to model the various possible scenarios of interest for non-destructive testing applications. Forward models are then used offline to generate the expected distribution of the proposed measurements which are used to train a hierarchical Bayesian network. In Bayesian analysis, all model parameters are treated as random variables, and inference of the parameters is made on the basis of posterior distribution given the observed data. Learned parameters of the posterior distribution obtained after the training can therefore be used to build an efficient classifier for differentiating new observed data in real time on the basis of pre-trained models. We will illustrate the implementation of the HBGM approach to ultrasonic measurements used for cement evaluation of cased wells in the oil industry.

  8. Analysis of Artificial Neural Network in Erosion Modeling: A Case Study of Serang Watershed

    NASA Astrophysics Data System (ADS)

    Arif, N.; Danoedoro, P.; Hartono

    2017-12-01

    Erosion modeling is an important tool for both land users and decision makers to evaluate land cultivation, and thus it is necessary to have a model that represents reality. Erosion models are complex because they involve uncertain data from different sources and processing procedures. Artificial neural networks can be relied on for complex and non-linear data processing such as erosion data. The main difficulty in artificial neural network training is the determination of the value of each network input parameter, i.e. the number of hidden layers, learning rate, momentum, and RMS. This study tested the capability of an artificial neural network to predict erosion risk with various input parameters through multiple simulations in order to obtain good classification results. The model was implemented in the Serang Watershed, Kulonprogo, Yogyakarta, which is one of the critical potential watersheds in Indonesia. The simulation results showed that the number of iterations had a significant effect on the accuracy compared to the other parameters. A small number of iterations can produce good accuracy if the combination of the other parameters is right. In this case, one hidden layer was sufficient to produce good accuracy. The highest training accuracy achieved in this study was 99.32%, which occurred in the ANN 14 simulation with a combination of network input parameters of 1 HL; LR 0.01; M 0.5; RMS 0.0001, and 15000 iterations. The ANN training accuracy was not influenced by the number of channels, i.e. the input dataset (erosion factors), or by the data dimensions; rather, it was determined by changes in the network parameters.

  9. Adaptive Optimal Control Using Frequency Selective Information of the System Uncertainty With Application to Unmanned Aircraft.

    PubMed

    Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian

    2018-01-01

    A new efficient adaptive optimal control approach is presented in this paper based on the indirect model reference adaptive control (MRAC) architecture for improving the adaptation and tracking performance of an uncertain system. The system here accounts for both matched and unmatched unknown uncertainties, which can act as plant failures or damages as well as input effectiveness failures or damages. For adaptation of the unknown parameters of these uncertainties, the frequency selective learning approach is used. Its idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is used for augmentation of the update law. It is capable of adjusting to a sudden change in system dynamics without depending on high adaptation gains and can satisfy exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties as well. Additionally, the controller of the MRAC system is designed using a new optimal control method. This method is a new linear quadratic regulator-based optimal control formulation for both output regulation and command tracking problems. It provides a closed-form control solution. The proposed overall approach is applied to the control of the lateral dynamics of an unmanned aircraft to show its effectiveness.

  10. Impact of Hydrogeological Uncertainty on Estimation of Environmental Risks Posed by Hydrocarbon Transportation Networks

    NASA Astrophysics Data System (ADS)

    Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.

    2017-11-01

    Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of temporal evolution of two key risk indices, volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the correspondent remediation costs.
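
    The second, inexpensive model described above, which represents the quantities of interest as polynomials of the random inputs, can be sketched as a polynomial surrogate fitted to a handful of full-model runs and then evaluated in a large Monte Carlo. The placeholder contamination model, parameter ranges and quantity of interest below are assumptions for illustration, and scikit-learn is assumed to be available.

```python
# Polynomial surrogate of an expensive model, then cheap Monte Carlo on it.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

def expensive_model(theta):
    # Placeholder for the physics-based contamination model.
    k, porosity = theta
    return 1200.0 * np.sqrt(k) / porosity

# A handful of expensive training runs...
X_train = rng.uniform([1e-5, 0.2], [1e-3, 0.45], size=(30, 2))
y_train = np.array([expensive_model(t) for t in X_train])
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_train, y_train)

# ...then a large, cheap Monte Carlo on the polynomial surrogate.
X_mc = rng.uniform([1e-5, 0.2], [1e-3, 0.45], size=(100_000, 2))
volumes = surrogate.predict(X_mc)
print("95th percentile of contaminated volume:", np.percentile(volumes, 95))
```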

  11. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions.

  12. Uncertainty in simulating wheat yields under climate change

    NASA Astrophysics Data System (ADS)

    Asseng, S.; Ewert, F.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P. J.; Rötter, R. P.; Cammarano, D.; Brisson, N.; Basso, B.; Martre, P.; Aggarwal, P. K.; Angulo, C.; Bertuzzi, P.; Biernath, C.; Challinor, A. J.; Doltra, J.; Gayler, S.; Goldberg, R.; Grant, R.; Heng, L.; Hooker, J.; Hunt, L. A.; Ingwersen, J.; Izaurralde, R. C.; Kersebaum, K. C.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Osborne, T. M.; Palosuo, T.; Priesack, E.; Ripoche, D.; Semenov, M. A.; Shcherbak, I.; Steduto, P.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Travasso, M.; Waha, K.; Wallach, D.; White, J. W.; Williams, J. R.; Wolf, J.

    2013-09-01

    Projections of climate change impacts on crop yields are inherently uncertain. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models are difficult. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.

  13. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
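
    The core regression step described above, minimising a weighted least-squares objective with a Gauss-Newton iteration and perturbation sensitivities, can be sketched as follows. This is an illustrative toy (not UCODE_2005 itself): the exponential "process model" and starting values stand in for an external simulation code, and no damping or convergence control is included.

```python
# Gauss-Newton minimisation of S(b) = r(b)^T W r(b) with forward-difference sensitivities.
import numpy as np

def jacobian(simulate, b, rel_step=1e-6):
    base = simulate(b)
    cols = []
    for j in range(len(b)):
        db = np.zeros_like(b)
        db[j] = rel_step * max(abs(b[j]), 1.0)
        cols.append((simulate(b + db) - base) / db[j])
    return np.column_stack(cols)

def gauss_newton(simulate, obs, weights, b0, n_iter=20):
    b = np.asarray(b0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        r = obs - simulate(b)                              # weighted residual vector
        X = jacobian(simulate, b)                          # sensitivity matrix
        b = b + np.linalg.solve(X.T @ W @ X, X.T @ W @ r)  # normal-equations update
    return b

# Toy example: estimate (a, k) of y = a * exp(-k t) from noisy observations.
t = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(0)
obs = 5.0 * np.exp(-0.4 * t) + 0.05 * rng.standard_normal(t.size)
print(gauss_newton(lambda b: b[0] * np.exp(-b[1] * t), obs, np.ones(t.size), [4.0, 0.5]))
```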

  14. Uncertainty analyses of the calibrated parameter values of a water quality model

    NASA Astrophysics Data System (ADS)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999 with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in the model calculation can occur. These uncertainties can be decreased with an increased calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like the algal biomass always permit lower forecast accuracy than variables with lower coefficients of variation, such as nitrate.

  15. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
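
    A hedged sketch of a two-stage Monte Carlo of the kind described above: the outer loop resamples plausible input-distribution parameters from the finite calibration samples (here by bootstrap), and the inner loop propagates each candidate distribution through the measurement equation, so the spread across outer draws reflects the extra uncertainty due to small sample sizes. The measurement equation, sample sizes and loop counts are illustrative, not the paper's case study.

```python
# Two-stage Monte Carlo with explicit finite-sample uncertainty in the inputs.
import numpy as np

rng = np.random.default_rng(11)

# Small observed samples of two input quantities.
x1_sample = rng.normal(10.0, 0.2, size=8)
x2_sample = rng.normal(5.0, 0.1, size=6)

def measurand(x1, x2):
    return x1 / x2                                       # illustrative non-linear equation

outer_means = []
for _ in range(500):                                     # stage 1: distribution-parameter draws
    b1 = rng.choice(x1_sample, size=x1_sample.size)      # bootstrap resample of each input
    b2 = rng.choice(x2_sample, size=x2_sample.size)
    x1 = rng.normal(b1.mean(), b1.std(ddof=1), 2000)     # stage 2: propagation
    x2 = rng.normal(b2.mean(), b2.std(ddof=1), 2000)
    outer_means.append(measurand(x1, x2).mean())

print("spread of the estimated mean across outer draws:",
      np.percentile(outer_means, [2.5, 97.5]))
```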

  16. Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Khan, U. T.

    2016-12-01

    Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data-driven models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility-theory-based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. The overall effect of these components is to replace the traditional ad hoc network configuration methods with ones based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network. The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method had higher performance (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.

  17. The Revised OB-1 Method for Metal-Water Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westfall, Robert Michael; Wright, Richard Q

    The OB-1 method for the calculation of the minimum critical mass (mcm) of fissile actinides in metal/water systems was described in a 2008 Nuclear Science and Engineering (NS&E) article. The purpose of the present work is to update and expand the application of this method with current nuclear data, including data uncertainties. The mcm and the hypothetical fissile metal density (ρF) in grams of metal/liter are obtained by a fit to values predicted with transport calculations. The input parameters required are thermal values for fission and absorption cross sections and nubar. A factor of (√π)/2 is used to convert to Maxwellian averaged values. The uncertainties for the fission and capture cross sections and the estimated nubar uncertainties are used to determine the uncertainties in the mcm, either in percent or grams.

  18. Robust fixed order dynamic compensation for large space structure control

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Byrns, Edward V., Jr.

    1989-01-01

    A simple formulation for designing fixed order dynamic compensators which are robust to both uncertainty at the plant input and structured uncertainty in the plant dynamics is presented. The emphasis is on designing low order compensators for systems of high order. The formulation is done in an output feedback setting which exploits an observer canonical form to represent the compensator dynamics. The formulation also precludes the use of direct feedback of the plant output. The main contribution lies in defining a method for penalizing the states of the plant and of the compensator, and for choosing the distribution on initial conditions so that the loop transfer matrix approximates that of a full state design. To improve robustness to parameter uncertainty, the formulation avoids the introduction of sensitivity states, which has led to complex formulations in earlier studies where only structured uncertainty has been considered.

  19. A variational approach to parameter estimation in ordinary differential equations.

    PubMed

    Kaschek, Daniel; Timmer, Jens

    2012-08-14

    Ordinary differential equations are widely used in the field of systems biology and chemical engineering to model chemical reaction networks. Numerous techniques have been developed to estimate parameters like rate constants, initial conditions or steady state concentrations from time-resolved data. In contrast to this countable set of parameters, the estimation of entire courses of network components corresponds to an innumerable set of parameters. The approach presented in this work is able to deal with course estimation for extrinsic system inputs or intrinsic reactants, neither of which is constrained by the reaction network itself. Our method is based on variational calculus, which is carried out analytically to derive an augmented system of differential equations including the unconstrained components as ordinary state variables. Finally, conventional parameter estimation is applied to the augmented system, resulting in a combined estimation of courses and parameters. The combined estimation approach takes the uncertainty in input courses correctly into account. This leads to precise parameter estimates and correct confidence intervals. In particular, this implies that small motifs of large reaction networks can be analysed independently of the rest. Through the use of variational methods, elements from control theory and statistics are combined, allowing for future transfer of methods between the two fields.

  20. Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

    DOE PAGES

    Palmiotti, Giuseppe; Salvatores, Massimo

    2012-01-01

    Sensitivity methodologies have had a remarkable history of success since their adoption in the reactor physics field. Sensitivity coefficients can be used for different objectives such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.

  1. The two-dimensional Monte Carlo: a new methodologic paradigm for dose reconstruction for epidemiological studies.

    PubMed

    Simon, Steven L; Hoffman, F Owen; Hofer, Eduard

    2015-01-01

    Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by numbers of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, are defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
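
    The 2DMC idea described above can be sketched with a small simulation: each outer realization draws the shared (cohort-wide) uncertain parameters once, while the inner sampling draws the subject-specific (unshared) factors, so each outer realization yields one internally consistent vector of cohort doses. The parameter values and the simple intake-times-dose-coefficient model below are illustrative assumptions, not the dose models used in an actual epidemiological study.

```python
# Two-dimensional Monte Carlo: shared vs. unshared uncertainty (illustrative).
import numpy as np

rng = np.random.default_rng(2015)
n_subjects, n_vectors = 200, 1000

intake = rng.lognormal(mean=1.0, sigma=0.4, size=n_subjects)   # subject-specific intake

dose_vectors = np.empty((n_vectors, n_subjects))
for v in range(n_vectors):
    # Shared uncertainty: one draw of the dose coefficient applies to everyone.
    dose_coefficient = rng.lognormal(mean=np.log(2e-2), sigma=0.3)
    # Unshared uncertainty: an independent draw for every subject.
    unshared = rng.lognormal(mean=0.0, sigma=0.2, size=n_subjects)
    dose_vectors[v] = dose_coefficient * intake * unshared

# Each row is a possibly-true set of cohort doses; doses within a row are
# correlated through the shared dose coefficient.
print("median cohort-mean dose:", np.median(dose_vectors.mean(axis=1)))
```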

  2. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    NASA Astrophysics Data System (ADS)

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water input to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses; this implies that the processes of percolation and evaporation would impact hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model's capability for simulating/predicting water resources.

  3. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  4. Estimating the volume of supra-glacial melt lakes across Greenland: A study of uncertainties derived from multi-platform water-reflectance models

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, L.; Selmes, N.; Murray, T.; Scharrer, K.; Booth, A. D.

    2012-12-01

    Large volumes of water are necessary to propagate cracks to the glacial bed via hydrofractures. Hydrological models have shown that lakes above a critical volume can supply the necessary water for this process, so the ability to measure water depth in lakes remotely is important for studying these processes. Previously, water depth has been derived from the optical properties of water using data from high resolution optical satellite images, such as ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), IKONOS and LANDSAT. These studies used water-reflectance models based on the Bouguer-Lambert-Beer law and lacked any estimation of model uncertainties. We propose an optimized model based on Sneed and Hamilton's (2007) approach to estimate water depths in supraglacial lakes and undertake a robust analysis of the errors for the first time. We used atmospherically corrected data from ASTER and MODIS as input to the water-reflectance model. Three physical parameters are needed: bed albedo, water attenuation coefficient and reflectance of optically deep water. These parameters were derived for each wavelength using standard calibrations. As a reference dataset, we obtained lake geometries using ICESat measurements over empty lakes. Differences between modeled and reference depths are used in a minimization model to obtain parameters for the water-reflectance model, yielding optimized lake depth estimates. Our key contribution is the development of a Monte Carlo simulation to run the water-reflectance model, which allows us to quantify the uncertainties in water depth and hence water volume. This robust statistical analysis provides a better understanding of the sensitivity of the water-reflectance model to the choice of input parameters, which should contribute to the understanding of the influence of surface-derived melt-water on ice sheet dynamics. Sneed, W.A. and Hamilton, G.S., 2007: Evolution of melt pond volume on the surface of the Greenland Ice Sheet. Geophysical Research Letters, 34, 1-4.
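
    A hedged sketch of this kind of retrieval: the depth at a lake pixel follows the Bouguer-Lambert-Beer form z = [ln(A_d - R_inf) - ln(R - R_inf)] / g given by Sneed and Hamilton (2007), and a Monte Carlo loop over the three physical parameters (bed albedo A_d, attenuation coefficient g, reflectance of optically deep water R_inf) yields the depth uncertainty. All numerical values below are illustrative, not values from the study.

```python
# Monte Carlo depth uncertainty for one lake pixel from a water-reflectance model.
import numpy as np

rng = np.random.default_rng(8)

def lake_depth(R, A_d, R_inf, g):
    return (np.log(A_d - R_inf) - np.log(R - R_inf)) / g

R_pixel = 0.22                          # observed band reflectance of a lake pixel
n = 100_000
A_d = rng.normal(0.45, 0.03, n)         # bed (lake-bottom) albedo
R_inf = rng.normal(0.05, 0.01, n)       # reflectance of optically deep water
g = rng.normal(0.80, 0.10, n)           # two-way attenuation coefficient [1/m]

z = lake_depth(R_pixel, A_d, R_inf, g)
z = z[np.isfinite(z) & (z > 0)]         # discard the rare non-physical samples
print(f"depth = {z.mean():.2f} m ± {z.std(ddof=1):.2f} m")
```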

  5. Sensitivity of Asteroid Impact Risk to Uncertainty in Asteroid Properties and Entry Parameters

    NASA Astrophysics Data System (ADS)

    Wheeler, Lorien; Mathias, Donovan; Dotson, Jessie L.; NASA Asteroid Threat Assessment Project

    2017-10-01

    A central challenge in assessing the threat posed by asteroids striking Earth is the large amount of uncertainty inherent throughout all aspects of the problem. Many asteroid properties are not well characterized and can range widely from strong, dense, monolithic irons to loosely bound, highly porous rubble piles. Even for an object of known properties, the specific entry velocity, angle, and impact location can swing the potential consequence from no damage to causing millions of casualties. Due to the extreme rarity of large asteroid strikes, there are also large uncertainties in how different types of asteroids will interact with the atmosphere during entry, how readily they may break up or ablate, and how much surface damage will be caused by the resulting airbursts or impacts. In this work, we use our Probabilistic Asteroid Impact Risk (PAIR) model to investigate the sensitivity of asteroid impact damage to uncertainties in key asteroid properties, entry parameters, or modeling assumptions. The PAIR model combines physics-based analytic models of asteroid entry and damage in a probabilistic Monte Carlo framework to assess the risk posed by a wide range of potential impacts. The model samples from uncertainty distributions of asteroid properties and entry parameters to generate millions of specific impact cases, and models the atmospheric entry and damage for each case, including blast overpressure, thermal radiation, tsunami inundation, and global effects. To assess the risk sensitivity, we alternately fix and vary the different input parameters and compare the effect on the resulting range of damage produced. The goal of these studies is to help guide future efforts in asteroid characterization and model refinement by determining which properties most significantly affect the potential risk.

  6. Long-term hydrology and water quality of a drained pine plantation in North Carolina

    Treesearch

    D.M. Amatya; R.W. Skaggs

    2011-01-01

    Long-term data provide a basis for understanding natural variability, reducing uncertainty in model inputs and parameter estimation, and developing new hypotheses. This article evaluates 21 years (1988-2008) of hydrologic data and 17 years (1988-2005) of water quality data from a drained pine plantation in eastern North Carolina. The plantation age was 14 years at the...

  7. Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque

    NASA Astrophysics Data System (ADS)

    Klaus, Leonard; Eichstädt, Sascha

    2018-04-01

    For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
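
    A minimal sketch of Monte Carlo propagation through separately characterised sub-models is shown below. The two sub-model quantities (a torsional stiffness and a head moment of inertia) and their distributions are invented for illustration; they are not PTB calibration results, and the combined output is a simple natural-frequency expression rather than the full transducer model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical sub-model results: each delivers an estimate with its own uncertainty
torsional_stiffness = rng.normal(1.2e4, 2.0e2, n)    # N*m/rad, from sub-model A
head_inertia        = rng.normal(3.5e-3, 1.0e-4, n)  # kg*m^2, from sub-model B

# Combined model output: undamped natural frequency of the transducer head
f0 = np.sqrt(torsional_stiffness / head_inertia) / (2 * np.pi)

# GUM Supplement 2 style summary: estimate, standard uncertainty, 95% coverage interval
print(f"f0 = {f0.mean():.1f} Hz, u(f0) = {f0.std(ddof=1):.1f} Hz, "
      f"95% interval = [{np.percentile(f0, 2.5):.1f}, {np.percentile(f0, 97.5):.1f}] Hz")
```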

  8. An integrated uncertainty analysis and data assimilation approach for improved streamflow predictions

    NASA Astrophysics Data System (ADS)

    Hogue, T. S.; He, M.; Franz, K. J.; Margulis, S. A.; Vrugt, J. A.

    2010-12-01

    The current study presents an integrated uncertainty analysis and data assimilation approach to improve streamflow predictions while simultaneously providing meaningful estimates of the associated uncertainty. Study models include the National Weather Service (NWS) operational snow model (SNOW17) and rainfall-runoff model (SAC-SMA). The proposed approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) to simultaneously estimate uncertainties in model parameters, forcing, and observations. An ensemble Kalman filter (EnKF) is configured with the DREAM-identified uncertainty structure and applied to assimilating snow water equivalent data into the SNOW17 model for improved snowmelt simulations. Snowmelt estimates then serve as an input to the SAC-SMA model to provide streamflow predictions at the basin outlet. The robustness and usefulness of the approach are evaluated for a snow-dominated watershed in the northern Sierra Mountains. This presentation describes the implementation of DREAM and EnKF into the coupled SNOW17 and SAC-SMA models and summarizes study results and findings.
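
    For illustration, the sketch below shows a single ensemble Kalman filter analysis step for a scalar snow water equivalent state with perturbed observations. The ensemble values and observation error are hypothetical; in the study, the forecast spread would come from the DREAM-identified uncertainty structure and the state would sit inside the SNOW17 model.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens = 100

# Hypothetical forecast ensemble of SWE (mm), e.g. spread produced by parameter/forcing uncertainty
swe_forecast = rng.normal(250.0, 30.0, n_ens)

# Observation and its error standard deviation (illustrative values)
swe_obs, obs_sigma = 230.0, 10.0

# EnKF analysis step for a scalar state observed directly (perturbed-observation variant)
P = np.var(swe_forecast, ddof=1)     # forecast error variance from the ensemble
K = P / (P + obs_sigma**2)           # Kalman gain
perturbed_obs = swe_obs + rng.normal(0.0, obs_sigma, n_ens)
swe_analysis = swe_forecast + K * (perturbed_obs - swe_forecast)

print(f"forecast {swe_forecast.mean():.1f} +/- {swe_forecast.std(ddof=1):.1f} mm  ->  "
      f"analysis {swe_analysis.mean():.1f} +/- {swe_analysis.std(ddof=1):.1f} mm")
```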

  9. Quantifying geological uncertainty for flow and transport modeling in multi-modal heterogeneous formations

    NASA Astrophysics Data System (ADS)

    Feyen, Luc; Caers, Jef

    2006-06-01

    In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
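
    The idea of bootstrapping the uncertain statistical parameters can be illustrated with a plain (non-spatial) bootstrap of global statistics from a small conditioning dataset; a spatial bootstrap would additionally honour the spatial correlation of the data when resampling. The log-conductivity values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic conditioning data: log-hydraulic conductivity at 60 wells (illustrative)
logK = rng.normal(-4.0, 0.8, 60)

# Bootstrap the global statistics to quantify their estimation uncertainty
n_boot = 10_000
means = np.empty(n_boot)
stds = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(logK, size=logK.size, replace=True)
    means[b], stds[b] = resample.mean(), resample.std(ddof=1)

print(f"mean logK = {logK.mean():.2f}, 95% CI "
      f"[{np.percentile(means, 2.5):.2f}, {np.percentile(means, 97.5):.2f}]")
print(f"std  logK = {logK.std(ddof=1):.2f}, 95% CI "
      f"[{np.percentile(stds, 2.5):.2f}, {np.percentile(stds, 97.5):.2f}]")
```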

  10. Uncertainty assessment of a model for biological nitrogen and phosphorus removal: Application to a large wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    In the last few years, the use of mathematical models of WasteWater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand detailed input data for their implementation, which must be obtained through an extensive data-gathering campaign that cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain, making quantification of the uncertainty imperative. Despite its importance, only a few uncertainty quantification studies have been carried out in the wastewater treatment field, and those studies included only a few of the sources of model uncertainty. To help develop this area, the paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has rarely been applied in the wastewater field. The model was based on activated-sludge models 1 (ASM1) and 2 (ASM2). Different approaches can be used for uncertainty analysis. The GLUE methodology requires a large number of Monte Carlo simulations in which individual parameter values are randomly sampled from probability distributions to determine a set of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data were gathered. The analysis yielded useful insights for WWTP modelling, identifying the crucial aspects where uncertainty is highest and where, therefore, more effort should be invested in both data gathering and modelling practice.
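
    A compact sketch of the GLUE recipe described above is given below, using a toy first-order decay model in place of the ASM-based plant model. The prior ranges, informal likelihood, and behavioural threshold are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(theta, t):
    """Toy stand-in for the ASM-based plant model: first-order decay of a concentration."""
    c0, k = theta
    return c0 * np.exp(-k * t)

t_obs = np.linspace(0.0, 10.0, 20)
obs = model((50.0, 0.3), t_obs) + rng.normal(0.0, 2.0, t_obs.size)  # synthetic observations

# GLUE: sample parameters from broad priors, score each set with an informal likelihood,
# keep the "behavioural" sets above a threshold, and weight predictions by likelihood.
n = 20_000
theta = np.column_stack([rng.uniform(30, 70, n), rng.uniform(0.05, 1.0, n)])
sse = np.array([np.sum((model(th, t_obs) - obs) ** 2) for th in theta])
likelihood = np.exp(-sse / sse.min())   # one common informal likelihood choice
behavioural = likelihood > 0.05         # user-defined acceptance threshold

w = likelihood[behavioural] / likelihood[behavioural].sum()
preds = np.array([model(th, t_obs) for th in theta[behavioural]])

# Weighted 5th/95th percentile prediction band at one example time step
i = 10
order = np.argsort(preds[:, i])
lo, hi = np.interp([0.05, 0.95], np.cumsum(w[order]), preds[order, i])
print(f"{behavioural.sum()} behavioural sets; 90% GLUE band at t = {t_obs[i]:.1f}: "
      f"[{lo:.1f}, {hi:.1f}]")
```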

  11. Understanding Climate Uncertainty with an Ocean Focus

    NASA Astrophysics Data System (ADS)

    Tokmakian, R. T.

    2009-12-01

    Uncertainty in climate simulations arises from various aspects of the end-to-end process of modeling the Earth’s climate. First, there is uncertainty from the structure of the climate model components (e.g. ocean/ice/atmosphere). Even the most complex models are deficient, not only in the complexity of the processes they represent, but also in which processes are included in a particular model. Next, uncertainties arise from the inherent error in the initial and boundary conditions of a simulation. Initial conditions describe the state of the weather or climate at the beginning of the simulation and typically come from observations. Finally, there is the uncertainty associated with the values of parameters in the model. These parameters may represent physical constants or effects, such as ocean mixing, or non-physical aspects of modeling and computation. The uncertainty in these input parameters propagates through the non-linear model to give uncertainty in the outputs. The models in 2020 will no doubt be better than today’s models, but they will still be imperfect, and development of uncertainty analysis technology is a critical aspect of understanding model realism and prediction capability. Smith [2002] and Cox and Stephenson [2007] discuss the need for methods to quantify the uncertainties within complicated systems so that limitations or weaknesses of the climate model can be understood. In making climate predictions, we need to have available both the most reliable model or simulation and methods to quantify the reliability of a simulation. If quantitative uncertainty questions of the internal model dynamics are to be answered with complex simulations such as AOGCMs, then the only known path forward is based on model ensembles that characterize behavior with alternative parameter settings [e.g. Rougier, 2007]. The relevance and feasibility of using "Statistical Analysis of Computer Code Output" (SACCO) methods for examining uncertainty in ocean circulation due to parameter specification will be described, and early results using the ocean/ice components of the CCSM climate model in a designed experiment framework will be shown. Cox, P. and D. Stephenson, Climate Change: A Changing Climate for Prediction, 2007, Science 317 (5835), 207, DOI: 10.1126/science.1145956. Rougier, J. C., 2007: Probabilistic Inference for Future Climate Using an Ensemble of Climate Model Evaluations, Climatic Change, 81, 247-264. Smith L., 2002, What might we learn from climate forecasts? Proc. Nat’l Academy of Sciences, Vol. 99, suppl. 1, 2487-2492 doi:10.1073/pnas.012580599.

  12. Enhancing PTFs with remotely sensed data for multi-scale soil water retention estimation

    NASA Astrophysics Data System (ADS)

    Jana, Raghavendra B.; Mohanty, Binayak P.

    2011-03-01

    Use of remotely sensed data products in the earth science and water resources fields is growing due to increasingly easy availability of the data. Traditionally, pedotransfer functions (PTFs) employed for soil hydraulic parameter estimation from other easily available data have used basic soil texture and structure information as inputs. Inclusion of surrogate/supplementary data such as topography and vegetation information has shown some improvement in the PTF's ability to estimate more accurate soil hydraulic parameters. Artificial neural networks (ANNs) are a popular tool for PTF development, and are usually applied across matching spatial scales of inputs and outputs. However, different hydrologic, hydro-climatic, and contaminant transport models require input data at different scales, all of which may not be easily available from existing databases. In such a scenario, it becomes necessary to scale the soil hydraulic parameter values estimated by PTFs to suit the model requirements. Also, uncertainties in the predictions need to be quantified to enable users to gauge the suitability of a particular dataset in their applications. Bayesian Neural Networks (BNNs) inherently provide uncertainty estimates for their outputs due to their utilization of Markov Chain Monte Carlo (MCMC) techniques. In this paper, we present a PTF methodology to estimate soil water retention characteristics built on a Bayesian framework for training of neural networks and utilizing several in situ and remotely sensed datasets jointly. The BNN is also applied across spatial scales to provide fine scale outputs when trained with coarse scale data. Our training data inputs include ground/remotely sensed soil texture, bulk density, elevation, and Leaf Area Index (LAI) at 1 km resolution, while similar properties measured at a point scale are used as fine scale inputs. The methodology was tested in two different hydro-climatic regions. We also tested the effect of varying the support scale of the training data for the BNNs by sequentially aggregating finer resolution training data to coarser resolutions, and the applicability of the technique to upscaling problems. The BNN outputs are corrected for bias using a non-linear CDF-matching technique. Final results show good promise for the suitability of this Bayesian Neural Network approach for soil hydraulic parameter estimation across spatial scales using ground-, air-, or space-based remotely sensed geophysical parameters. Inclusion of remotely sensed data such as elevation and LAI in addition to in situ soil physical properties improved the estimation capabilities of the BNN-based PTF in certain conditions.
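
    As an illustration of the bias-correction step mentioned above, the sketch below applies a simple non-linear CDF-matching (quantile mapping) correction to synthetic estimates; the data and the choice of 99 quantiles are assumptions made for the example, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: biased estimates of soil water content and reference measurements
estimated = rng.normal(0.30, 0.04, 500) + 0.03   # systematically biased estimates
reference = rng.normal(0.30, 0.05, 500)          # target (reference) distribution

def cdf_match(values, source, target, n_q=99):
    """Map 'values' through quantile-to-quantile (CDF matching) from source to target."""
    q = np.linspace(1, 99, n_q)
    src_q = np.percentile(source, q)
    tgt_q = np.percentile(target, q)
    return np.interp(values, src_q, tgt_q)

corrected = cdf_match(estimated, estimated, reference)
print(f"bias before: {estimated.mean() - reference.mean():+.3f}, "
      f"after: {corrected.mean() - reference.mean():+.3f}")
```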

  13. Uncertainty analysis of vegetation distribution in the northern high latitudes during the 21st century with a dynamic vegetation model.

    PubMed

    Jiang, Yueyang; Zhuang, Qianlai; Schaphoff, Sibyll; Sitch, Stephen; Sokolov, Andrei; Kicklighter, David; Melillo, Jerry

    2012-03-01

    This study aims to assess how high-latitude vegetation may respond under various climate scenarios during the 21st century, with a focus on analyzing the uncertainty induced by model parameters and how this uncertainty compares to the uncertainty induced by various climates. The analysis was based on a set of 10,000 Monte Carlo ensemble Lund-Potsdam-Jena (LPJ) simulations for the northern high latitudes (45°N and poleward) for the period 1900-2100. The LPJ Dynamic Global Vegetation Model (LPJ-DGVM) was run under contemporary and future climates from four Special Report Emission Scenarios (SRES), A1FI, A2, B1, and B2, based on the Hadley Centre General Circulation Model (GCM), and six climate scenarios, X901M, X902L, X903H, X904M, X905L, and X906H from the Integrated Global System Model (IGSM) at the Massachusetts Institute of Technology (MIT). In the current dynamic vegetation model, some parameters are more important than others in determining the vegetation distribution. Parameters that control plant carbon uptake and light-use efficiency have the predominant influence on the vegetation distribution of both woody and herbaceous plant functional types. The relative importance of different parameters varies temporally and spatially and is influenced by climate inputs. In addition to climate, these parameters play an important role in determining the vegetation distribution in the region. The parameter-based uncertainties contribute most to the total uncertainty. The current warming conditions lead to a complexity of vegetation responses in the region. Temperate trees will be more sensitive to climate variability, compared with boreal forest trees and C3 perennial grasses. This sensitivity would result in a unanimous northward greenness migration due to anomalous warming in the northern high latitudes. Temporally, boreal needleleaved evergreen plants are projected to decline considerably, and a large portion of C3 perennial grass is projected to disappear by the end of the 21st century. In contrast, the area of temperate trees would increase, especially under the most extreme A1FI scenario. As the warming continues, the northward greenness expansion in the Arctic region could continue.

  14. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    PubMed

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  15. Impact of Measurement Uncertainties on Receptor Modeling of Speciated Atmospheric Mercury.

    PubMed

    Cheng, I; Zhang, L; Xu, X

    2016-02-09

    Gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurement uncertainties could potentially affect the analysis and modeling of atmospheric mercury. This study investigated the impact of GOM measurement uncertainties on Principal Components Analysis (PCA), Absolute Principal Component Scores (APCS), and Concentration-Weighted Trajectory (CWT) receptor modeling results. The atmospheric mercury data input into these receptor models were modified by combining GOM and PBM into a single reactive mercury (RM) parameter and excluding low GOM measurements to improve the data quality. PCA and APCS results derived from RM or excluding low GOM measurements were similar to those in previous studies, except for a non-unique component and an additional component extracted from the RM dataset. The percent variance explained by the major components from a previous study differed slightly compared to RM and excluding low GOM measurements. CWT results were more sensitive to the input of RM than GOM excluding low measurements. Larger discrepancies were found between RM and GOM source regions than those between RM and PBM. Depending on the season, CWT source regions of RM differed by 40-61% compared to GOM from a previous study. No improvement in correlations between CWT results and anthropogenic mercury emissions were found.

  16. Impact of Measurement Uncertainties on Receptor Modeling of Speciated Atmospheric Mercury

    PubMed Central

    Cheng, I.; Zhang, L.; Xu, X.

    2016-01-01

    Gaseous oxidized mercury (GOM) and particle-bound mercury (PBM) measurement uncertainties could potentially affect the analysis and modeling of atmospheric mercury. This study investigated the impact of GOM measurement uncertainties on Principal Components Analysis (PCA), Absolute Principal Component Scores (APCS), and Concentration-Weighted Trajectory (CWT) receptor modeling results. The atmospheric mercury data input into these receptor models were modified by combining GOM and PBM into a single reactive mercury (RM) parameter and excluding low GOM measurements to improve the data quality. PCA and APCS results derived from RM or excluding low GOM measurements were similar to those in previous studies, except for a non-unique component and an additional component extracted from the RM dataset. The percent variance explained by the major components from a previous study differed slightly compared to RM and excluding low GOM measurements. CWT results were more sensitive to the input of RM than GOM excluding low measurements. Larger discrepancies were found between RM and GOM source regions than those between RM and PBM. Depending on the season, CWT source regions of RM differed by 40–61% compared to GOM from a previous study. No improvement in correlations between CWT results and anthropogenic mercury emissions were found. PMID:26857835

  17. Assessment of Spatial Transferability of Process-Based Hydrological Model Parameters in Two Neighboring Catchments in the Himalayan Region

    NASA Astrophysics Data System (ADS)

    Nepal, S.

    2016-12-01

    The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium range flows (rising and recession limbs). Model efficiency according to both the Nash-Sutcliffe coefficient and the coefficient of determination was above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had sensitivity responses in a similar range. Generalized likelihood uncertainty estimation (GLUE) results suggest that the observed runoff lies within the parameter uncertainty range most of the time, although values occasionally fall outside that range, especially during flood peaks and more so in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and the lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This suggests that the process-based J2000 model can be applied to ungauged catchments in the Himalayan region, which could provide important insights into hydrological system dynamics and much needed information to support water resources planning and management.

  18. A Formal Approach to Empirical Dynamic Model Optimization and Validation

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.

  19. Semi-globally input-to-state stable controller design for flexible spacecraft attitude stabilization under bounded disturbances

    NASA Astrophysics Data System (ADS)

    Hu, Qinglei

    2010-02-01

    A semi-globally input-to-state stable (ISS) control law is derived for flexible spacecraft attitude maneuvers in the presence of parameter uncertainties and external disturbances. The modified Rodrigues parameters (MRPs) are used as the kinematic variables since they are nonsingular for all possible rotations. This novel, simple controller is a proportional-plus-derivative (PD) type controller plus a sign function, obtained through a special Lyapunov function construction involving the sum of quadratic terms in the angular velocities, kinematic parameters, modal variables and a cross-state weighting. A sufficient condition is provided under which this nonlinear PD-type control law renders the system semi-globally input-to-state stable, such that the closed-loop system is robust with respect to any disturbance within a quantifiable restriction on the amplitude, as well as the set of initial conditions, if the control gains are designed appropriately. In addition to detailed derivations of the new controller design and a rigorous sketch of all the associated stability and attitude convergence proofs, extensive simulation studies have been conducted to validate the design, and the results are presented to highlight the ensuing closed-loop performance benefits when compared with conventional control schemes.

  20. An Integrated Uncertainty Analysis and Ensemble-based Data Assimilation Framework for Operational Snow Predictions

    NASA Astrophysics Data System (ADS)

    He, M.; Hogue, T. S.; Franz, K.; Margulis, S. A.; Vrugt, J. A.

    2009-12-01

    The National Weather Service (NWS), the agency responsible for short- and long-term streamflow predictions across the nation, primarily applies the SNOW17 model for operational forecasting of snow accumulation and melt. The SNOW17-forecasted snowmelt serves as an input to a rainfall-runoff model for streamflow forecasts in snow-dominated areas. The accuracy of streamflow predictions in these areas largely relies on the accuracy of snowmelt. However, no direct snowmelt measurements are available to validate the SNOW17 predictions. Instead, indirect measurements such as snow water equivalent (SWE) measurements or discharge are typically used to calibrate SNOW17 parameters. In addition, the forecast practice is inherently deterministic, lacking tools to systematically address forecasting uncertainties (e.g., uncertainties in parameters, forcing, SWE and discharge observations, etc.). The current research presents an Integrated Uncertainty analysis and Ensemble-based data Assimilation (IUEA) framework to improve predictions of snowmelt and discharge while simultaneously providing meaningful estimates of the associated uncertainty. The IUEA approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) to simultaneously estimate uncertainties in model parameters, forcing, and observations. The robustness and usefulness of the IUEA-SNOW17 framework is evaluated for snow-dominated watersheds in the northern Sierra Mountains, using the coupled IUEA-SNOW17 and an operational soil moisture accounting model (SAC-SMA). Preliminary results are promising and indicate successful performance of the coupled IUEA-SNOW17 framework. Implementation of the SNOW17 with the IUEA is straightforward and requires no major modification to the SNOW17 model structure. The IUEA-SNOW17 framework is intended to be modular and transferable and should assist the NWS in advancing the current forecasting system and reinforcing current operational forecasting skill.

  1. Correlated uncertainties in Monte Carlo reaction rate calculations

    NASA Astrophysics Data System (ADS)

    Longland, Richard

    2017-07-01

    Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
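
    One way to realise such correlations in a Monte Carlo sampler is to factor each normalized quantity into a shared reference factor and an independent factor, as sketched below. The resonance strengths, ratios, and fractional uncertainties are invented for illustration, and the summed "rate" is only a toy stand-in for a real rate integral.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Hypothetical resonance strengths (arbitrary units), all normalized to a common reference
ref_strength, ref_frac_unc = 1.0, 0.10           # reference resonance and its fractional uncertainty
ratios = np.array([0.5, 2.0, 3.5])                # strength ratios relative to the reference
ratio_frac_unc = np.array([0.05, 0.08, 0.06])     # uncorrelated part of each uncertainty

# A shared lognormal factor for the reference introduces the correlation;
# independent lognormal factors represent the individual ratio measurements.
ref_factor = rng.lognormal(0.0, ref_frac_unc, n)
indep = rng.lognormal(0.0, ratio_frac_unc, (n, ratios.size))
strengths = ref_strength * ref_factor[:, None] * ratios * indep

# Toy "rate" proportional to the sum of strengths; correlations widen its uncertainty
rate = strengths.sum(axis=1)
print("correlation matrix of sampled strengths:")
print(np.corrcoef(strengths, rowvar=False).round(2))
print(f"rate fractional uncertainty ~ {rate.std() / rate.mean():.3f}")
```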

  2. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They also can be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical measure of the worth of each prediction: a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and, ultimately, reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and therefore is valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as the radar track data, into the analysis.

  3. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    NASA Astrophysics Data System (ADS)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defendable accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction at the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model with the accuracy quantified in a similar manner to the single physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously therefore precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic natures of the algorithms, we make an effort to introduce the material in a general form to allow non-nuclear engineers to benefit from this work.
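
    A minimal illustration of snapshot-based reduction with a user-defined truncation tolerance is sketched below using a plain SVD; the snapshot matrix is synthetic and the simple tolerance rule is only a stand-in for the dissertation's error-bounded algorithms.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic snapshot matrix: 200 state vectors (e.g. flux solutions) of dimension 1000,
# generated from a low-dimensional structure plus small noise for illustration.
basis_true = rng.normal(size=(1000, 5))
snapshots = basis_true @ rng.normal(size=(5, 200)) + 0.01 * rng.normal(size=(1000, 200))

# Snapshot ROM: truncate the SVD so discarded singular values stay below a tolerance
# (the key idea behind error-bounded reduction).
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
tol = 1e-2 * s[0]
r = int(np.sum(s > tol))
active = U[:, :r]          # reduced basis spanning the "active subspace" of the state space

# A new state can be approximated by its projection onto the active subspace
x_new = basis_true @ rng.normal(size=5)
x_red = active @ (active.T @ x_new)
print(f"reduced rank r = {r}, relative projection error = "
      f"{np.linalg.norm(x_new - x_red) / np.linalg.norm(x_new):.2e}")
```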

  4. Takagi-Sugeno fuzzy model based robust dissipative control for uncertain flexible spacecraft with saturated time-delay input.

    PubMed

    Xu, Shidong; Sun, Guanghui; Sun, Weichao

    2017-01-01

    In this paper, the problem of robust dissipative control is investigated for uncertain flexible spacecraft based on a Takagi-Sugeno (T-S) fuzzy model with saturated time-delay input. Unlike most existing strategies, a T-S fuzzy approximation approach is used to model the nonlinear dynamics of the flexible spacecraft. Simultaneously, the physical constraints of the system, such as input delay, input saturation, and parameter uncertainties, are also accounted for in the fuzzy model. By employing the Lyapunov-Krasovskii method and a convex optimization technique, a novel robust controller is proposed to implement rest-to-rest attitude maneuvers for flexible spacecraft, and the guaranteed dissipative performance enables the uncertain closed-loop system to reject the influence of elastic vibrations and external disturbances. Finally, an illustrative design example, together with simulation results, is provided to confirm the applicability and merits of the developed control strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, Ernest; Hadgu, Teklu; Greenberg, Harris

    This report is one follow-on to a study of reference geologic disposal design concepts (Hardin et al. 2011a). Based on an analysis of maximum temperatures, that study concluded that certain disposal concepts would require extended decay storage prior to emplacement, or the use of small waste packages, or both. The study used nominal values for thermal properties of host geologic media and engineered materials, demonstrating the need for uncertainty analysis to support the conclusions. This report is a first step that identifies the input parameters of the maximum temperature calculation, surveys published data on measured values, uses an analytical approach to determine which parameters are most important, and performs an example sensitivity analysis. Using results from this first step, temperature calculations planned for FY12 can focus on only the important parameters, and can use the uncertainty ranges reported here. The survey of published information on thermal properties of geologic media and engineered materials is intended to be sufficient for use in generic calculations to evaluate the feasibility of reference disposal concepts. A full compendium of literature data is beyond the scope of this report. The term “uncertainty” is used here to represent both measurement uncertainty and spatial variability, or variability across host geologic units. For the most important parameters (e.g., buffer thermal conductivity) the extent of literature data surveyed samples these different forms of uncertainty and variability. Finally, this report is intended to be one chapter or section of a larger FY12 deliverable summarizing all the work on design concepts and thermal load management for geologic disposal (M3FT-12SN0804032, due 15Aug2012).

  6. OPEN CLUSTERS AS PROBES OF THE GALACTIC MAGNETIC FIELD. I. CLUSTER PROPERTIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoq, Sadia; Clemens, D. P., E-mail: shoq@bu.edu, E-mail: clemens@bu.edu

    2015-10-15

    Stars in open clusters are powerful probes of the intervening Galactic magnetic field via background starlight polarimetry because they provide constraints on the magnetic field distances. We use 2MASS photometric data for a sample of 31 clusters in the outer Galaxy for which near-IR polarimetric data were obtained to determine the cluster distances, ages, and reddenings via fitting theoretical isochrones to cluster color–magnitude diagrams. The fitting approach uses an objective χ² minimization technique to derive the cluster properties and their uncertainties. We found the ages, distances, and reddenings for 24 of the clusters, and the distances and reddenings for 6 additional clusters that were either sparse or faint in the near-IR. The derived ranges of log(age), distance, and E(B−V) were 7.25–9.63, ∼670–6160 pc, and 0.02–1.46 mag, respectively. The distance uncertainties ranged from ∼8% to 20%. The derived parameters were compared to previous studies, and most cluster parameters agree within our uncertainties. To test the accuracy of the fitting technique, synthetic clusters with 50, 100, or 200 cluster members and a wide range of ages were fit. These tests recovered the input parameters within their uncertainties for more than 90% of the individual synthetic cluster parameters. These results indicate that the fitting technique likely provides reliable estimates of cluster properties. The distances derived will be used in an upcoming study of the Galactic magnetic field in the outer Galaxy.

  7. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
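
    The contrast between Monte Carlo integration and a deterministic, non-intrusive quadrature rule can be shown on a one-parameter toy problem. Below, the quantity of interest is the analytic solution of a decay ODE with an uncertain rate constant; the distribution and values are illustrative only, not taken from the paper.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

# Uncertain decay rate k ~ N(0.5, 0.05); quantity of interest y(T) = y0 * exp(-k*T)
mu_k, sigma_k, y0, T = 0.5, 0.05, 1.0, 2.0
qoi = lambda k: y0 * np.exp(-k * T)

# Monte Carlo integration (many random model evaluations)
rng = np.random.default_rng(7)
mc = qoi(rng.normal(mu_k, sigma_k, 200_000))
print(f"Monte Carlo  : mean = {mc.mean():.6f}, std = {mc.std():.6f}")

# Non-intrusive spectral projection with Gauss-Hermite quadrature (deterministic rule):
# for xi ~ N(0,1),  E[g(xi)] = (1/sqrt(pi)) * sum_i w_i * g(sqrt(2)*x_i)
x, w = hermgauss(8)                         # only 8 deterministic model evaluations
vals = qoi(mu_k + sigma_k * np.sqrt(2) * x)
mean_q = np.sum(w * vals) / np.sqrt(np.pi)
var_q = np.sum(w * vals**2) / np.sqrt(np.pi) - mean_q**2
print(f"Gauss-Hermite: mean = {mean_q:.6f}, std = {np.sqrt(var_q):.6f}")
```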

  8. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
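
    A minimal ABC rejection sampler for a continuous-time first-order system is sketched below. The model, prior ranges, noise level, and tolerance are hypothetical; the cited work uses a more sophisticated ABC algorithm together with a term-selection procedure.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate(theta, t):
    """Toy continuous-time system: step response of y' = -a*y + b*u with u = 1, y(0) = 0."""
    a, b = theta
    return (b / a) * (1.0 - np.exp(-a * t))

t = np.linspace(0.0, 5.0, 50)
y_obs = simulate((1.5, 2.0), t) + rng.normal(0.0, 0.05, t.size)  # synthetic "measured" data

# ABC rejection sampling: draw parameters from the prior, simulate, and accept when the
# distance between simulated and observed outputs is below a tolerance.
n, eps = 50_000, 0.6
prior = np.column_stack([rng.uniform(0.1, 5.0, n), rng.uniform(0.1, 5.0, n)])
dist = np.array([np.linalg.norm(simulate(th, t) - y_obs) for th in prior])
posterior = prior[dist < eps]

print(f"accepted {posterior.shape[0]} of {n} draws")
print(f"a: {posterior[:, 0].mean():.2f} +/- {posterior[:, 0].std():.2f}   "
      f"b: {posterior[:, 1].mean():.2f} +/- {posterior[:, 1].std():.2f}")
```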

  9. Probabilistic margin evaluation on accidental transients for the ASTRID reactor project

    NASA Astrophysics Data System (ADS)

    Marquès, Michel

    2014-06-01

    ASTRID is a technological demonstrator of a Sodium-cooled Fast Reactor (SFR) under development. The conceptual design studies are being conducted in accordance with the Generation IV reactor objectives, particularly in terms of improving safety. For the hypothetical events belonging to the accidental category "severe accident prevention situations", which have a very low frequency of occurrence, the safety demonstration is no longer based on a deterministic demonstration with conservative assumptions on models and parameters but on a "Best-Estimate Plus Uncertainty" (BEPU) approach. This BEPU approach is presented in this paper for an Unprotected Loss-of-Flow (ULOF) event. The Best-Estimate (BE) analysis of this ULOF transient is performed with the CATHARE2 code, which is the French reference system code for SFR applications. The objective of the BEPU analysis is twofold: first, evaluate the safety margin to sodium boiling taking into account the uncertainties on the input parameters of the CATHARE2 code (twenty-two uncertain input parameters have been identified, which can be classified into five groups: reactor power, accident management, pump characteristics, reactivity coefficients, and thermal parameters and head losses); secondly, quantify the contribution of each input uncertainty to the overall uncertainty of the safety margins, in order to refocus R&D efforts on the most influential factors. This paper focuses on the methodological aspects of the evaluation of the safety margin. At least for the preliminary phase of the project (conceptual design), a probabilistic criterion has been fixed in the context of this BEPU analysis; this criterion is the value of the margin to sodium boiling that has a 95% probability of being exceeded, obtained with a confidence level of 95% (i.e. the M5,95 percentile of the margin distribution). This paper presents two methods used to assess this percentile: the Wilks method and the Bootstrap method; the effectiveness of the two methods is compared on the basis of 500 simulations performed with the CATHARE2 code. We conclude that, with only 100 simulations performed with the CATHARE2 code, which is a number of simulations workable in the conceptual design phase of the ASTRID project where the models and the hypotheses are often modified, it is best to use the bootstrap method to evaluate the percentile M5,95 of the margin to sodium boiling, which will provide a slightly conservative result. On the other hand, in order to obtain an accurate estimation of the percentile M5,95, for the safety report for example, it will be necessary to perform at least 300 simulations with the CATHARE2 code. In this case, both methods (Wilks and Bootstrap) would give equivalent results.
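
    The two percentile-estimation routes can be sketched as follows. The margins below are synthetic draws, not CATHARE2 results; the Wilks sample size shown is the standard first-order, one-sided 95/95 result, and the bootstrap bound is one simple way to extract a conservative M5,95-type estimate.

```python
import numpy as np

rng = np.random.default_rng(9)

# Wilks: smallest n such that the minimum of n runs bounds the 5th percentile with
# 95% confidence: 1 - 0.95**n >= 0.95  ->  n >= 59
n_wilks = int(np.ceil(np.log(1 - 0.95) / np.log(0.95)))
print(f"Wilks (1st order, one-sided 95/95): n = {n_wilks}")

# Hypothetical margins to sodium boiling (K) from 100 best-estimate code runs (illustrative)
margins = rng.normal(120.0, 25.0, 100)

# Wilks estimate: with 100 > 59 runs, the sample minimum is a 95/95 lower bound on M5
print(f"Wilks bound from 100 runs: {margins.min():.1f} K")

# Bootstrap estimate of the 5th percentile and a one-sided lower confidence bound on it
boot = np.array([np.percentile(rng.choice(margins, margins.size, replace=True), 5)
                 for _ in range(10_000)])
print(f"bootstrap M5: median {np.median(boot):.1f} K, "
      f"95% lower confidence bound {np.percentile(boot, 5):.1f} K")
```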

  10. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE PAGES

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain; ...

    2017-09-23

    Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimizes the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access, which is also the parameter with the greatest discrepancy between the tools; this implies that accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  11. Testing the robustness of optimal access vessel fleet selection for operation and maintenance of offshore wind farms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain

    Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimizes the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access, which is also the parameter with the greatest discrepancy between the tools; this implies that accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.

  12. Realising the Uncertainty Enabled Model Web

    NASA Astrophysics Data System (ADS)

    Cornford, D.; Bastin, L.; Pebesma, E. J.; Williams, M.; Stasch, C.; Jones, R.; Gerharz, L.

    2012-12-01

    The FP7 funded UncertWeb project aims to create the "uncertainty enabled model web". The central concept here is that geospatial models and data resources are exposed via standard web service interfaces, such as the Open Geospatial Consortium (OGC) suite of encodings and interface standards, allowing the creation of complex workflows combining both data and models. The focus of UncertWeb is on the issue of managing uncertainty in such workflows, and providing the standards, architecture, tools and software support necessary to realise the "uncertainty enabled model web". In this paper we summarise the developments in the first two years of UncertWeb, illustrating several key points with examples taken from the use case requirements that motivate the project. Firstly we address the issue of encoding specifications. We explain the usage of UncertML 2.0, a flexible encoding for representing uncertainty based on a probabilistic approach. This is designed to be used within existing standards such as Observations and Measurements (O&M) and data quality elements of ISO19115 / 19139 (geographic information metadata and encoding specifications) as well as more broadly outside the OGC domain. We show profiles of O&M that have been developed within UncertWeb and how UncertML 2.0 is used within these. We also show encodings based on NetCDF and discuss possible future directions for encodings in JSON. We then discuss the issues of workflow construction, considering discovery of resources (both data and models). We discuss why a brokering approach to service composition is necessary in a world where the web service interfaces remain relatively heterogeneous, including many non-OGC approaches, in particular the more mainstream SOAP and WSDL approaches. We discuss the trade-offs between delegating uncertainty management functions to the service interfaces themselves and integrating the functions in the workflow management system. We describe two utility services to address conversion between uncertainty types, and between the spatial / temporal support of service inputs / outputs. Finally we describe the tools being generated within the UncertWeb project, considering three main aspects: i) Elicitation of uncertainties on model inputs. We are developing tools to enable domain experts to provide judgements about input uncertainties from UncertWeb model components (e.g. parameters in meteorological models) which allow panels of experts to engage in the process and reach a consensus view on the current knowledge / beliefs about that parameter or variable. We are developing systems for continuous and categorical variables as well as stationary spatial fields. ii) Visualisation of the resulting uncertain outputs from the end of the workflow, but also at intermediate steps. At this point we have prototype implementations driven by the requirements from the use cases that motivate UncertWeb. iii) Sensitivity and uncertainty analysis on model outputs. Here we show the design of the overall system we are developing, including the deployment of an emulator framework to allow computationally efficient approaches. We conclude with a summary of the open issues and remaining challenges we are facing in UncertWeb, and provide a brief overview of how we plan to tackle these.

  13. A potato model intercomparison across varying climates and productivity levels.

    PubMed

    Fleisher, David H; Condori, Bruno; Quiroz, Roberto; Alva, Ashok; Asseng, Senthold; Barreda, Carolina; Bindi, Marco; Boote, Kenneth J; Ferrise, Roberto; Franke, Angelinus C; Govindakrishnan, Panamanna M; Harahagazwe, Dieudonne; Hoogenboom, Gerrit; Naresh Kumar, Soora; Merante, Paolo; Nendel, Claas; Olesen, Jorgen E; Parker, Phillip S; Raes, Dirk; Raymundo, Rubi; Ruane, Alex C; Stockle, Claudio; Supit, Iwan; Vanuytrecht, Eline; Wolf, Joost; Woli, Prem

    2017-03-01

    A potato crop multimodel assessment was conducted to quantify variation among models and evaluate responses to climate change. Nine modeling groups simulated agronomic and climatic responses at low-input (Chinoli, Bolivia, and Gisozi, Burundi) and high-input (Jyndevad, Denmark, and Washington, United States) management sites. Two calibration stages were explored: partial (P1), where experimental dry matter data were not provided, and full (P2). The median model ensemble response outperformed any single model in terms of replicating observed yield across all locations. Uncertainty in simulated yield decreased from 38% to 20% between P1 and P2. Model uncertainty increased with interannual variability, and predictions for all agronomic variables were significantly different from one model to another (P < 0.001). Uncertainty averaged 15% higher for low- vs. high-input sites, with larger differences observed for evapotranspiration (ET), nitrogen uptake, and water use efficiency as compared to dry matter. A minimum of five partial, or three full, calibrated models was required for an ensemble approach to keep variability below that of common field variation. Model variation was not influenced by change in carbon dioxide (C), but increased as much as 41% and 23% for yield and ET, respectively, as temperature (T) or rainfall (W) moved away from historical levels. Increases in T accounted for the highest amount of uncertainty, suggesting that methods and parameters for T sensitivity represent a considerable unknown among models. Using median model ensemble values, yield increased on average 6% per 100-ppm C, declined 4.6% per °C, and declined 2% for every 10% decrease in rainfall (for nonirrigated sites). Differences in predictions due to model representation of light utilization were significant (P < 0.01). These are the first reported results quantifying uncertainty for tuber/root crops and suggest modeling assessments of climate change impact on potato may be improved using an ensemble approach. © 2016 John Wiley & Sons Ltd.

  14. What drives uncertainty in model diagnoses of carbon dynamics in southern US forests: climate, vegetation, disturbance, or model parameters?

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Gu, H.; Williams, C. A.

    2017-12-01

    Results from terrestrial carbon cycle models have multiple sources of uncertainty, each with its own behavior and range. Their relative importance and how they combine have received little attention. This study investigates how various sources of uncertainty propagate, temporally and spatially, in CASA-Disturbance (CASA-D). CASA-D simulates the impact of climatic forcing and disturbance legacies on forest carbon dynamics with the following steps. First, we infer annual growth and mortality rates from measured biomass stocks (FIA) over time and disturbance (e.g., fire, harvest, bark beetle) to represent annual post-disturbance carbon flux trajectories across forest types and site productivity settings. Then, annual carbon fluxes are estimated from these trajectories using time since disturbance, which is inferred from biomass (NBCD 2000) and disturbance maps (NAFD, MTBS and ADS). Finally, we apply monthly climatic scalars derived from default CASA to distribute the annual carbon fluxes to each month. This study assesses carbon flux uncertainty from two sources: driving data, including climatic and forest biomass inputs, and the three most sensitive parameters in CASA-D, namely maximum light use efficiency, temperature sensitivity of soil respiration (Q10) and optimum temperature, identified using EFAST (Extended Fourier Amplitude Sensitivity Testing). We quantify model uncertainties from each and report their relative importance in estimating the forest carbon sink/source in the southeastern United States from 2003 to 2010.

  15. Stream Discharge and Evapotranspiration Responses to Climate Change and Their Associated Uncertainties in a Large Semi-Arid Basin

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2017-12-01

    Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered powerful tools for predicting future hydrological conditions in watershed systems over the past two decades. Streamflow and evapotranspiration are two important components of watershed water balance estimation: the former is the most commonly used indicator of the overall water budget, and the latter is the second largest component of the water budget (the largest outflow from the system). One of the main concerns in watershed-scale hydrological modeling is the uncertainty associated with model prediction, which could arise from errors in model parameters and input meteorological data, or errors in the model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties is vital for water resources managers to make proper decisions based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically-calibrated, physically-based, semi-distributed hydrological model. The results of this study could provide valuable insights into applying hydrological models in large-scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on hydrological process variables of interest under different climate change scenarios.

  16. Stochastic capture zone analysis of an arsenic-contaminated well using the generalized likelihood uncertainty estimator (GLUE) methodology

    NASA Astrophysics Data System (ADS)

    Morse, Brad S.; Pohll, Greg; Huntington, Justin; Rodriguez Castillo, Ramiro

    2003-06-01

    In 1992, Mexican researchers discovered concentrations of arsenic in excess of World Health Organization (WHO) standards in several municipal wells in the Zimapan Valley of Mexico. This study describes a method to delineate a capture zone for one of the most highly contaminated wells to aid in future well siting. A stochastic approach was used to model the capture zone because of the high level of uncertainty in several input parameters. Two stochastic techniques were performed and compared: "standard" Monte Carlo analysis and the generalized likelihood uncertainty estimator (GLUE) methodology. The GLUE procedure differs from standard Monte Carlo analysis in that it incorporates a goodness of fit (termed a likelihood measure) in evaluating the model. This allows for more information (in this case, head data) to be used in the uncertainty analysis, resulting in smaller prediction uncertainty. Two likelihood measures are tested in this study to determine which results in better agreement with the observed heads. While the standard Monte Carlo approach does not aid in parameter estimation, the GLUE methodology indicates best-fit models when hydraulic conductivity is approximately 10^(-6.5) m/s, with vertically isotropic conditions and large quantities of interbasin flow entering the basin. Probabilistic isochrones (capture zone boundaries) are then presented, and as predicted, the GLUE-derived capture zones are significantly smaller in area than those from the standard Monte Carlo approach.
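
    The GLUE procedure described above is straightforward to sketch. The following illustrative Python fragment, in which flow_model, the parameter range and the head data are all hypothetical stand-ins for the actual groundwater model and field observations, shows the basic loop: sample parameters, score each run with a likelihood measure based on goodness of fit to observed heads, discard non-behavioural runs, and form likelihood-weighted estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_model(log10_K, n_wells):
    """Stand-in for the groundwater flow model: returns simulated heads.
    A real application would run a full flow model; this is a toy function."""
    return 100.0 - 5.0 * (log10_K + 6.5) + 0.1 * np.arange(n_wells)

observed_heads = 100.0 + rng.normal(0.0, 0.3, size=10)   # hypothetical head data

# 1. Sample the uncertain parameter from its prior range
log10_K_samples = rng.uniform(-8.0, -5.0, size=5000)      # log10 hydraulic conductivity

# 2. Likelihood measure: inverse error variance (one common GLUE choice)
likelihoods = []
for k in log10_K_samples:
    sim = flow_model(k, observed_heads.size)
    err_var = np.mean((sim - observed_heads) ** 2)
    likelihoods.append(1.0 / err_var)
likelihoods = np.array(likelihoods)

# 3. Reject "non-behavioural" runs and normalise weights
behavioural = likelihoods > np.percentile(likelihoods, 90)
weights = likelihoods[behavioural] / likelihoods[behavioural].sum()
params = log10_K_samples[behavioural]

# Weighted posterior summary for log10(K)
post_mean = np.sum(weights * params)
print("posterior mean log10(K):", round(post_mean, 2))
```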

  17. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. A probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. A deterministic structural analysis at the mean probability level was performed, and the resulting stress and displacement contours are presented. This is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as the maximum displacement, the maximum tensile and compressive stresses of the facesheet in the x and y directions, and the maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.

  18. Development of an Expert Judgement Elicitation and Calibration Methodology for Risk Analysis in Conceptual Vehicle Design

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Keating, Charles; Conway, Bruce; Chytka, Trina

    2004-01-01

    A comprehensive expert-judgment elicitation methodology to quantify input parameter uncertainty and analysis tool uncertainty in a conceptual launch vehicle design analysis has been developed. The ten-phase methodology seeks to obtain expert judgment for quantifying uncertainties as a probability distribution so that multidisciplinary risk analysis studies can be performed. The calibration and aggregation techniques presented as part of the methodology are aimed at improving individual expert estimates, and provide an approach to aggregate multiple expert judgments into a single probability distribution. The purpose of this report is to document the methodology development and its validation through application to a reference aerospace vehicle. A detailed summary of the application exercise, including calibration and aggregation results, is presented. A discussion of possible future steps in this research area is given.
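
    One common way to aggregate multiple expert judgments into a single probability distribution is a weighted linear opinion pool, with weights informed by calibration performance. The report's actual calibration and aggregation techniques are not reproduced here; the sketch below only illustrates the general idea using hypothetical expert means, standard deviations and weights.

```python
import numpy as np
from scipy import stats

# Hypothetical elicited judgments: each expert gives a mean and standard
# deviation for an uncertain input parameter (e.g., a mass-property margin).
expert_means = np.array([1.10, 1.25, 1.05])
expert_sds   = np.array([0.08, 0.15, 0.10])
# Calibration weights (e.g., from seed-question performance); must sum to 1.
weights      = np.array([0.5, 0.2, 0.3])

x = np.linspace(0.7, 1.7, 500)

# Linear opinion pool: weighted mixture of the individual expert densities
pooled_pdf = sum(w * stats.norm.pdf(x, m, s)
                 for w, m, s in zip(weights, expert_means, expert_sds))

# Moments of the aggregated distribution
pool_mean = np.trapz(x * pooled_pdf, x)
pool_var  = np.trapz((x - pool_mean) ** 2 * pooled_pdf, x)
print("aggregated mean:", round(pool_mean, 3), " sd:", round(np.sqrt(pool_var), 3))
```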

  19. Uncertainty in Simulating Wheat Yields Under Climate Change

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Rosenzweig, Cynthia; Jones, J. W.; Hatfield, J. W.; Ruane, A. C.; Boote, K. J.; Thornburn, P. J.; Rotter, R. P.; Cammarano, D.; hide

    2013-01-01

    Projections of climate change impacts on crop yields are inherently uncertain [1]. Uncertainty is often quantified when projecting future greenhouse gas emissions and their influence on climate [2]. However, multi-model uncertainty analysis of crop responses to climate change is rare because systematic and objective comparisons among process-based crop simulation models [1,3] are difficult [4]. Here we present the largest standardized model intercomparison for climate change impacts so far. We found that individual crop models are able to simulate measured wheat grain yields accurately under a range of environments, particularly if the input information is sufficient. However, simulated climate change impacts vary across models owing to differences in model structures and parameter values. A greater proportion of the uncertainty in climate change impact projections was due to variations among crop models than to variations among downscaled general circulation models. Uncertainties in simulated impacts increased with CO2 concentrations and associated warming. These impact uncertainties can be reduced by improving temperature and CO2 relationships in models and better quantified through use of multi-model ensembles. Less uncertainty in describing how climate change may affect agricultural productivity will aid adaptation strategy development and policymaking.

  20. Uncertainty in Measurement: Procedures for Determining Uncertainty With Application to Clinical Laboratory Calculations.

    PubMed

    Frenkel, Robert B; Farrance, Ian

    2018-01-01

    The "Guide to the Expression of Uncertainty in Measurement" (GUM) is the foundational document of metrology. Its recommendations apply to all areas of metrology including metrology associated with the biomedical sciences. When the output of a measurement process depends on the measurement of several inputs through a measurement equation or functional relationship, the propagation of uncertainties in the inputs to the uncertainty in the output demands a level of understanding of the differential calculus. This review is intended as an elementary guide to the differential calculus and its application to uncertainty in measurement. The review is in two parts. In Part I, Section 3, we consider the case of a single input and introduce the concepts of error and uncertainty. Next we discuss, in the following sections in Part I, such notions as derivatives and differentials, and the sensitivity of an output to errors in the input. The derivatives of functions are obtained using very elementary mathematics. The overall purpose of this review, here in Part I and subsequently in Part II, is to present the differential calculus for those in the medical sciences who wish to gain a quick but accurate understanding of the propagation of uncertainties. © 2018 Elsevier Inc. All rights reserved.

  1. Nonlinear discrete-time multirate adaptive control of non-linear vibrations of smart beams

    NASA Astrophysics Data System (ADS)

    Georgiou, Georgios; Foutsitzi, Georgia A.; Stavroulakis, Georgios E.

    2018-06-01

    The nonlinear adaptive digital control of a smart piezoelectric beam is considered. It is shown that, in a sampled-data context, a multirate control strategy provides an appropriate framework for achieving vibration regulation while ensuring the stability of the whole control system. Under parametric uncertainties in the model parameters (damping ratios, frequencies, levels of nonlinearities and cross-coupling, control input parameters), the scheme is completed with an adaptation law deduced from hyperstability concepts. This results in the asymptotic satisfaction of the control objectives at the sampling instants. Simulation results are presented.

  2. Monte Carlo exploration of Mikheyev-Smirnov-Wolfenstein solutions to the solar neutrino problem

    NASA Technical Reports Server (NTRS)

    Shi, X.; Schramm, D. N.; Bahcall, J. N.

    1992-01-01

    The paper explores the impact of astrophysical uncertainties on the Mikheyev-Smirnov-Wolfenstein (MSW) solution by calculating the allowed MSW solutions for 1000 different solar models with a Monte Carlo selection of solar model input parameters, assuming a full three-family MSW mixing. Applications are made to the chlorine, gallium, Kamiokande, and Borexino experiments. The initial GALLEX result limits the mixing parameters to the upper diagonal and the vertical regions of the MSW triangle. The expected event rates in the Borexino experiment are also calculated, assuming the MSW solutions implied by GALLEX.

  3. Bayesian inference to identify parameters in viscoelasticity

    NASA Astrophysics Data System (ADS)

    Rappel, Hussein; Beex, Lars A. A.; Bordas, Stéphane P. A.

    2017-08-01

    This contribution discusses Bayesian inference (BI) as an approach to identify parameters in viscoelasticity. The aims are: (i) to show that the prior has a substantial influence for viscoelasticity, (ii) to show that this influence decreases for an increasing number of measurements and (iii) to show how different types of experiments influence the identified parameters and their uncertainties. The standard linear solid model is the material description of interest and a relaxation test, a constant strain-rate test and a creep test are the tensile experiments focused on. The experimental data are artificially created, allowing us to make a one-to-one comparison between the input parameters and the identified parameter values. Besides dealing with the aforementioned issues, we believe that this contribution forms a comprehensible start for those interested in applying BI in viscoelasticity.
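
    A minimal version of this workflow can be sketched with a random-walk Metropolis sampler: generate artificial relaxation data from known parameters, then recover the posterior over those parameters. The model form, noise level, priors and proposal width below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def relaxation(t, E_inf, E1, tau):
    """Stress relaxation modulus of a standard linear solid (illustrative form)."""
    return E_inf + E1 * np.exp(-t / tau)

# Artificial "experimental" data, as in the paper: the truth is known exactly
t = np.linspace(0.0, 10.0, 40)
true_params = (1.0, 2.0, 1.5)                 # E_inf, E1, tau (arbitrary units)
sigma = 0.05
data = relaxation(t, *true_params) + rng.normal(0, sigma, t.size)

def log_prior(p):
    E_inf, E1, tau = p
    if 0 < E_inf < 10 and 0 < E1 < 10 and 0.1 < tau < 10:
        return 0.0            # flat prior inside the bounds
    return -np.inf

def log_posterior(p):
    lp = log_prior(p)
    if not np.isfinite(lp):
        return -np.inf
    resid = data - relaxation(t, *p)
    return lp - 0.5 * np.sum(resid ** 2) / sigma ** 2

# Simple random-walk Metropolis sampler
chain, current = [], np.array([2.0, 1.0, 1.0])
logp = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(0, 0.05, 3)
    logp_new = log_posterior(proposal)
    if np.log(rng.random()) < logp_new - logp:
        current, logp = proposal, logp_new
    chain.append(current.copy())
chain = np.array(chain[5000:])                 # discard burn-in

print("posterior means:", chain.mean(axis=0).round(3))
print("posterior sds  :", chain.std(axis=0).round(3))
```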

  4. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations.

  5. Nonlinear gearshifts control of dual-clutch transmissions during inertia phase.

    PubMed

    Hu, Yunfeng; Tian, Lu; Gao, Bingzhao; Chen, Hong

    2014-07-01

    In this paper, a model-based nonlinear gearshift controller is designed by the backstepping method to improve the shift quality of vehicles with a dual-clutch transmission (DCT). For ease of implementation, the controller is rearranged into a concise structure which contains a feedforward control and a feedback control. Then, robustness of the closed-loop error system is discussed in the framework of input-to-state stability (ISS) theory, where model uncertainties are considered as additive disturbance inputs. Furthermore, due to the application of the backstepping method, the closed-loop error system takes the form of a linear system. Using linear system theory, a guideline for selecting the controller parameters is deduced, which reduces the workload of parameter tuning. Finally, simulation results and Hardware-in-the-Loop (HiL) simulation are presented to validate the effectiveness of the designed controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Can integrative catchment management mitigate future water quality issues caused by climate change and socio-economic development?

    NASA Astrophysics Data System (ADS)

    Honti, Mark; Schuwirth, Nele; Rieckermann, Jörg; Stamm, Christian

    2017-03-01

    The design and evaluation of solutions for integrated surface water quality management requires an integrated modelling approach. Integrated models have to be comprehensive enough to cover the aspects relevant for management decisions, allowing for mapping of larger-scale processes such as climate change to the regional and local contexts. Besides this, models have to be sufficiently simple and fast to apply proper methods of uncertainty analysis, covering model structure deficits and error propagation through the chain of sub-models. Here, we present a new integrated catchment model satisfying both conditions. The conceptual iWaQa model was developed to support the integrated management of small streams. It can be used to predict traditional water quality parameters, such as nutrients, and a wide set of organic micropollutants (plant and material protection products), by considering all major pollutant pathways in urban and agricultural environments. Due to its simplicity, the model allows for a full, propagative analysis of predictive uncertainty, including certain structural and input errors. The usefulness of the model is demonstrated by predicting future surface water quality in a small catchment with mixed land use in the Swiss Plateau. We consider climate change, population growth or decline, socio-economic development, and the implementation of management strategies to tackle urban and agricultural point and non-point sources of pollution. Our results indicate that input and model structure uncertainties are the most influential factors for certain water quality parameters. In these cases model uncertainty is already high for present conditions. Nevertheless, accounting for today's uncertainty makes management fairly robust to the foreseen range of potential changes in the next decades. The assessment of total predictive uncertainty allows for selecting management strategies that show small sensitivity to poorly known boundary conditions. The identification of important sources of uncertainty helps to guide future monitoring efforts and pinpoints key indicators whose evolution should be closely followed to adapt management. The possible impact of climate change is clearly demonstrated by water quality changing substantially depending on the individual climate model chain. However, when all climate trajectories are combined, human land use and management decisions have the larger influence on water quality over the study's 2050 time horizon.

  7. On the distinguishability of HRF models in fMRI.

    PubMed

    Rosa, Paulo N; Figueiredo, Patricia; Silvestre, Carlos J

    2015-01-01

    Modeling the Hemodynamic Response Function (HRF) is a critical step in fMRI studies of brain activity, and it is often desirable to estimate HRF parameters with physiological interpretability. A biophysically informed model of the HRF can be described by a non-linear time-invariant dynamic system. However, the identification of this dynamic system may leave much uncertainty on the exact values of the parameters. Moreover, the high noise levels in the data may hinder the model estimation task. In this context, the estimation of the HRF may be seen as a problem of model falsification or invalidation, where we are interested in distinguishing among a set of eligible models of dynamic systems. Here, we propose a systematic tool to determine the distinguishability among a set of physiologically plausible HRF models. The concept of absolutely input-distinguishable systems is introduced and applied to a biophysically informed HRF model, by exploiting the structure of the underlying non-linear dynamic system. A strategy to model uncertainty in the input time-delay and magnitude is developed and its impact on the distinguishability of two physiologically plausible HRF models is assessed, in terms of the maximum noise amplitude above which it is not possible to guarantee the falsification of one model in relation to another. Finally, a methodology is proposed for the choice of the input sequence, or experimental paradigm, that maximizes the distinguishability of the HRF models under investigation. The proposed approach may be used to evaluate the performance of HRF model estimation techniques from fMRI data.

  8. Kinematic Structural Modelling in Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Schaaf, Alexander; de la Varga, Miguel; Florian Wellmann, J.

    2017-04-01

    We commonly capture our knowledge about the spatial distribution of distinct geological lithologies in the form of 3-D geological models. Several methods exist to create these models, each with its own strengths and limitations. We present here an approach to combine the functionalities of two modeling approaches - implicit interpolation and kinematic modelling methods - into one framework, while explicitly considering parameter uncertainties and thus model uncertainty. In recent work, we proposed an approach to implement implicit modelling algorithms into Bayesian networks. This was done to address the issues of input data uncertainty and integration of geological information from varying sources in the form of geological likelihood functions. However, one general shortcoming of implicit methods is that they usually do not take any physical constraints into consideration, which can result in unrealistic model outcomes and artifacts. On the other hand, kinematic structural modelling intends to reconstruct the history of a geological system based on physically driven kinematic events. This type of modelling incorporates simplified, physical laws into the model, at the cost of a substantial increment of usable uncertain parameters. In the work presented here, we show an integration of these two different modelling methodologies, taking advantage of the strengths of both of them. First, we treat the two types of models separately, capturing the information contained in the kinematic models and their specific parameters in the form of likelihood functions, in order to use them in the implicit modelling scheme. We then go further and combine the two modelling approaches into one single Bayesian network. This enables the direct flow of information between the parameters of the kinematic modelling step and the implicit modelling step and links the exclusive input data and likelihoods of the two different modelling algorithms into one probabilistic inference framework. In addition, we use the capabilities of Noddy to analyze the topology of structural models to demonstrate how topological information, such as the connectivity of two layers across an unconformity, can be used as a likelihood function. In an application to a synthetic case study, we show that our approach leads to a successful combination of the two different modelling concepts. Specifically, we show that we derive ensemble realizations of implicit models that now incorporate the knowledge of the kinematic aspects, representing an important step forward in the integration of knowledge and a corresponding estimation of uncertainties in structural geological models.

  9. Soil warming response: field experiments to Earth system models

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Bradford, M.; Wieder, W. R.; Crowther, T. W.

    2017-12-01

    The soil carbon response to climate change is extremely uncertain at the global scale, in part because of the uncertainty in the magnitude of the temperature response. To address this uncertainty we collected data from 48 soil warming manipulation studies and examined the temperature response using two different methods. First, we constructed a mixed effects model and extrapolated the effect of soil warming on soil carbon stocks under anticipated shifts in surface temperature during the 21st century. We saw significant vulnerability of soil carbon stocks, especially in high carbon soils. To place this effect in the context of anticipated changes in carbon inputs and moisture shifts, we applied a one-pool decay model with temperature sensitivities to the field data and imposed a post-hoc correction on the Earth system model simulations to integrate the field data with the simulated temperature response. We found that there was a slight elevation in the overall soil carbon losses, but that the field uncertainty of the temperature sensitivity parameter was as large as the variation among the model soil carbon projections. This implies that model-data integration is unlikely to constrain soil carbon simulations and highlights the importance of representing parameter uncertainty in these Earth system models to inform emissions targets.
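
    The post-hoc correction described above rests on a one-pool decay model with temperature sensitivity. A minimal sketch of such a model, and of propagating an assumed field-derived Q10 range through it, is given below; the pool size, input rate, decay constant and warming trajectory are all hypothetical.

```python
import numpy as np

def soil_carbon_trajectory(c0, inputs, k_ref, q10, temps, t_ref=15.0, years=100):
    """One-pool soil carbon model with Q10 temperature sensitivity:
    dC/dt = inputs - k_ref * Q10^((T - T_ref)/10) * C  (annual time step)."""
    c = c0
    traj = [c]
    for yr in range(years):
        k = k_ref * q10 ** ((temps[yr] - t_ref) / 10.0)
        c = c + inputs - k * c
        traj.append(c)
    return np.array(traj)

rng = np.random.default_rng(0)
warming = np.linspace(0.0, 4.0, 100)              # 4 C of warming over the century
temps = 15.0 + warming

# Propagate field-derived uncertainty in Q10 (hypothetical range) through the model
q10_samples = rng.uniform(1.5, 3.0, size=500)
finals = [soil_carbon_trajectory(100.0, 5.0, 0.05, q, temps)[-1] for q in q10_samples]

print("final stock mean:", round(np.mean(finals), 1),
      " 5-95% range:", np.round(np.percentile(finals, [5, 95]), 1))
```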

  10. Uncertainty Quantification for Ice Sheet Science and Sea Level Projections

    NASA Astrophysics Data System (ADS)

    Boening, C.; Schlegel, N.; Limonadi, D.; Schodlok, M.; Seroussi, H. L.; Larour, E. Y.; Watkins, M. M.

    2017-12-01

    In order to better quantify uncertainties in global mean sea level rise projections and in particular upper bounds, we aim at systematically evaluating the contributions from ice sheets and potential for extreme sea level rise due to sudden ice mass loss. Here, we take advantage of established uncertainty quantification tools embedded within the Ice Sheet System Model (ISSM) as well as sensitivities to ice/ocean interactions using melt rates and melt potential derived from MITgcm/ECCO2. With the use of these tools, we conduct Monte-Carlo style sampling experiments on forward simulations of the Antarctic ice sheet, by varying internal parameters and boundary conditions of the system over both extreme and credible worst-case ranges. Uncertainty bounds for climate forcing are informed by CMIP5 ensemble precipitation and ice melt estimates for year 2100, and uncertainty bounds for ocean melt rates are derived from a suite of regional sensitivity experiments using MITgcm. Resulting statistics allow us to assess how regional uncertainty in various parameters affect model estimates of century-scale sea level rise projections. The results inform efforts to a) isolate the processes and inputs that are most responsible for determining ice sheet contribution to sea level; b) redefine uncertainty brackets for century-scale projections; and c) provide a prioritized list of measurements, along with quantitative information on spatial and temporal resolution, required for reducing uncertainty in future sea level rise projections. Results indicate that ice sheet mass loss is dependent on the spatial resolution of key boundary conditions - such as bedrock topography and melt rates at the ice-ocean interface. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  11. Uncertainty and sensitivity assessments of an agricultural-hydrological model (RZWQM2) using the GLUE method

    NASA Astrophysics Data System (ADS)

    Sun, Mei; Zhang, Xiaolin; Huo, Zailin; Feng, Shaoyuan; Huang, Guanhua; Mao, Xiaomin

    2016-03-01

    Quantitatively ascertaining and analyzing the effects of model uncertainty on model reliability is a focal point for agricultural-hydrological models because of the many uncertainties in inputs and processes. In this study, the generalized likelihood uncertainty estimation (GLUE) method with Latin hypercube sampling (LHS) was used to evaluate the uncertainty of the RZWQM-DSSAT (RZWQM2) model output responses and the sensitivity of 25 parameters related to soil properties, nutrient transport and crop genetics. To avoid the one-sided risk of model prediction caused by using a single calibration criterion, a combined likelihood (CL) function integrating information on water, nitrogen, and crop production was introduced in the GLUE analysis for the predictions of the following four model output responses: the total amount of water content (T-SWC) and nitrate nitrogen (T-NIT) within the 1-m soil profile, and the seed yields of waxy maize (Y-Maize) and winter wheat (Y-Wheat). In the process of evaluating RZWQM2, measurements and meteorological data were obtained from a field experiment involving a winter wheat and waxy maize crop rotation system conducted from 2003 to 2004 in southern Beijing. The calibration and validation results indicated that the RZWQM2 model can be used to simulate crop growth and water-nitrogen migration and transformation in a wheat-maize crop rotation planting system. The results of the uncertainty analysis using the GLUE method showed that T-NIT was sensitive to parameters related to the nitrification coefficient, maize growth characteristics during the seedling period, the wheat vernalization period, and the wheat photoperiod. Parameters for soil saturated hydraulic conductivity, nitrogen nitrification and denitrification, and urea hydrolysis played an important role in the crop yield components. The prediction errors for RZWQM2 outputs with the CL function were lower and more uniform than those obtained with likelihood functions composed of an individual calibration criterion. This new and successful application of the GLUE method for determining the uncertainty and sensitivity of RZWQM2 could provide a reference for the optimization of model parameters with different emphases according to research interests.
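
    The combination of Latin hypercube sampling with a combined likelihood can be sketched generically. In the fragment below, toy_model, the parameter bounds, the observations and the error scales are hypothetical placeholders for RZWQM2 and the field data; the structure (LHS design, multi-criteria Gaussian likelihood, GLUE-style weighted prediction bounds) is what the example is meant to show.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of three illustrative parameters, scaled to their ranges
sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=2000)
lower = np.array([0.1, 10.0, 0.5])   # e.g. Ksat, nitrification coeff., photoperiod factor
upper = np.array([2.0, 60.0, 1.5])   # purely illustrative bounds
params = qmc.scale(unit_samples, lower, upper)

def toy_model(p):
    """Stand-in for RZWQM2: returns (soil water, soil nitrate, yield) responses."""
    ksat, nitrif, photo = p
    return np.array([30 - 2 * ksat, 50 - 0.5 * nitrif, 8 + 2 * photo])

obs = np.array([28.5, 35.0, 10.2])          # hypothetical observations
scales = np.array([1.0, 3.0, 0.5])          # observation error scales

# Combined likelihood: product of Gaussian likelihoods over water, nitrogen and yield
def combined_likelihood(sim):
    return np.exp(-0.5 * np.sum(((sim - obs) / scales) ** 2))

weights = np.array([combined_likelihood(toy_model(p)) for p in params])
weights /= weights.sum()

# GLUE-style weighted prediction bounds for yield
yields = np.array([toy_model(p)[2] for p in params])
order = np.argsort(yields)
cdf = np.cumsum(weights[order])
lo = yields[order][np.searchsorted(cdf, 0.05)]
hi = yields[order][np.searchsorted(cdf, 0.95)]
print(f"5-95% GLUE bounds on yield: {lo:.2f} - {hi:.2f}")
```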

  12. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.

  13. An Integrated Probabilistic-Fuzzy Assessment of Uncertainty Associated with Human Health Risk to MSW Landfill Leachate Contamination

    NASA Astrophysics Data System (ADS)

    Mishra, H.; Karmakar, S.; Kumar, R.

    2016-12-01

    Risk assessment will not remain simple when it involves multiple uncertain variables. Uncertainties in risk assessment result mainly from (1) the lack of knowledge of input variables (mostly random), and (2) data obtained from expert judgment or subjective interpretation of available information (non-random). An integrated probabilistic-fuzzy health risk approach has been proposed for the simultaneous treatment of the random and non-random uncertainties associated with the input parameters of a health risk model. The LandSim 2.5 landfill simulator has been used to simulate the Turbhe landfill (Navi Mumbai, India) activities for various time horizons. The LandSim-simulated groundwater concentrations of six heavy metals were then used in the health risk model. The water intake, exposure duration, exposure frequency, bioavailability and averaging time are treated as fuzzy variables, while the heavy metal concentrations and body weight are considered probabilistic variables. Identical alpha-cut and reliability levels are considered for the fuzzy and probabilistic variables, respectively, and the uncertainty in non-carcinogenic human health risk is estimated using ten thousand Monte Carlo simulations (MCS). This is the first effort in which all the health risk variables have been considered non-deterministic for the estimation of uncertainty in the risk output. The non-exceedance probability of the Hazard Index (HI), the summation of the hazard quotients of the heavy metals Co, Cu, Mn, Ni, Zn and Fe, has been quantified for the male and female populations and found to be high (HI > 1) for all the considered time horizons, which clearly indicates the possibility of adverse health effects on the population residing near the Turbhe landfill.
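
    The hybrid treatment of fuzzy and probabilistic variables can be illustrated with a small sketch: probabilistic inputs are sampled by Monte Carlo, fuzzy inputs are reduced to intervals at a chosen alpha-cut, and hazard quotient bounds are computed for each realisation. All distributions, triangular fuzzy numbers and the reference dose below are illustrative values, not those of the Turbhe study.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000  # Monte Carlo simulations

# Probabilistic variables (values purely illustrative)
conc = rng.lognormal(mean=np.log(0.05), sigma=0.4, size=n)   # metal conc. (mg/L)
body_weight = rng.normal(60.0, 8.0, size=n)                   # kg

# Fuzzy variables represented as triangular numbers (low, mode, high);
# an alpha-cut turns each into an interval.
def alpha_cut(tri, alpha):
    low, mode, high = tri
    return (low + alpha * (mode - low), high - alpha * (high - mode))

intake_tri   = (1.5, 2.0, 2.5)     # L/day
exposure_tri = (0.6, 0.8, 1.0)     # unitless exposure factor
rfd = 0.003                        # reference dose (mg/kg/day), illustrative

alpha = 0.5
intake_lo, intake_hi = alpha_cut(intake_tri, alpha)
expo_lo,   expo_hi   = alpha_cut(exposure_tri, alpha)

# Hazard quotient bounds for each Monte Carlo realisation at this alpha level
hq_lo = conc * intake_lo * expo_lo / (body_weight * rfd)
hq_hi = conc * intake_hi * expo_hi / (body_weight * rfd)

print("P(HQ > 1), lower fuzzy bound:", np.mean(hq_lo > 1).round(3))
print("P(HQ > 1), upper fuzzy bound:", np.mean(hq_hi > 1).round(3))
```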

  14. Variability And Uncertainty Analysis Of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher2, M.; Lerner, D.

    2006-12-01

    Characterization of the uncertainty associated with groundwater quality models is often of critical importance, for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need to assess the plausible range of model outputs. An improved, systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness and statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals defined through α-cuts. An important property of this theory is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing the uncertainty propagation of parameter values for estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.

  15. Comparison of SAND-II and FERRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootan, D.W.; Schmittroth, F.

    1981-01-01

    A comparison was made of the advantages and disadvantages of two codes, SAND-II and FERRET, for determining the neutron flux spectrum and uncertainty from experimental dosimeter measurements as anticipated in the FFTF Reactor Characterization Program. This comparison involved an examination of the methodology and the operational performance of each code. The merits of each code were identified with respect to theoretical basis, directness of method, solution uniqueness, subjective influences, and sensitivity to various input parameters.

  16. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for internal dosimetry. Volume 2: Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.

    1998-04-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequence from the accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA internal dosimetry models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on internal dosimetry, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  17. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
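
    The workflow outlined above (classify the parameter space into regions of smoother behaviour, then fit a surrogate per class) can be mocked up on a toy problem. In the sketch below the "land model" is replaced by an analytic function with two regimes, and the class labels are taken as known for simplicity; in practice they would be identified from the QoI itself. The scikit-learn estimators used here are generic stand-ins for the classifiers and surrogates the abstract refers to.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Toy land-model analogue: the QoI has two regimes with different input-output behaviour
X = rng.uniform(-1, 1, size=(2000, 5))                 # 5 illustrative parameters
regime = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # hidden regime boundary
y = np.where(regime == 1,
             2.0 * X[:, 0] + X[:, 2] ** 2,
             -1.0 * X[:, 1] + 0.3 * X[:, 3])
y += rng.normal(0, 0.05, size=y.size)

# Step 1: classify the parameter space into regions with smoother behaviour
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, regime)
labels = clf.predict(X)

# Step 2: fit a simple surrogate per class (linear here; sparse/low-rank in practice)
surrogates = {}
for c in np.unique(labels):
    mask = labels == c
    surrogates[c] = LinearRegression().fit(X[mask], y[mask])

# Step 3: crude importance ranking from the surrogate coefficients per class
for c, model in surrogates.items():
    ranking = np.argsort(-np.abs(model.coef_))
    print(f"class {c}: most influential parameters ->", ranking[:3])
```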

  18. Uncertainties in Forecasting Streamflow using Entropy Theory

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2017-12-01

    Streamflow forecasting is essential in river restoration, reservoir operation, power generation, irrigation, navigation, and water management. However, uncertainties always accompany a forecast, and they may affect the forecasting results and lead to large variations. Therefore, uncertainties must be considered and assessed properly when forecasting streamflow for water management. The aim of our work is to quantify the uncertainties involved in forecasting streamflow and to provide a reliable streamflow forecast. Although streamflow time series are stochastic, they exhibit seasonal and periodic patterns. Therefore, streamflow forecasting entails modeling seasonality, periodicity, and the correlation structure, and assessing uncertainties. This study applies entropy theory to forecast streamflow and measure the uncertainties during the forecasting process. To apply entropy theory for streamflow forecasting, spectral analysis is combined with time series analysis, as spectral analysis can be employed to characterize patterns of streamflow variation and identify the periodicity of streamflow. That is, it permits the extraction of significant information for understanding the streamflow process and its prediction. Application of entropy theory for streamflow forecasting involves determination of the spectral density, determination of parameters, and extension of the autocorrelation function. The uncertainties introduced by the precipitation input, the forecasting model and the forecast results are measured separately using entropy. Using information theory, we describe how these uncertainties are transported and aggregated during these processes.

  19. Monte Carlo simulation for uncertainty estimation on structural data in implicit 3-D geological modeling, a guide for disturbance distribution selection and parameterization

    NASA Astrophysics Data System (ADS)

    Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark

    2018-04-01

    Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations, combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper focuses on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector sampling for planar features, and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed, and propositions are made and evaluated on synthetic and realistic cases to address the cited issues. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
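
    Pole vector sampling for planar structural data is typically done with a spherical disturbance distribution such as the von Mises-Fisher distribution. The sketch below draws perturbed pole vectors around a single hypothetical foliation measurement; the concentration parameter, the measurement values and the east-north-up convention are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_vmf(mu, kappa, n):
    """Sample n unit vectors from a von Mises-Fisher distribution on the sphere
    centred on unit vector mu with concentration kappa (3-D case)."""
    mu = np.asarray(mu, dtype=float)
    mu /= np.linalg.norm(mu)
    u = rng.random(n)
    # Closed-form inverse CDF of cos(angular deviation) for the 3-D vMF distribution
    w = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    # Uniform azimuth around mu
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    # Build an orthonormal basis (e1, e2) perpendicular to mu
    helper = np.array([1.0, 0.0, 0.0]) if abs(mu[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(mu, helper); e1 /= np.linalg.norm(e1)
    e2 = np.cross(mu, e1)
    sin_t = np.sqrt(np.clip(1.0 - w ** 2, 0.0, None))
    return (w[:, None] * mu + sin_t[:, None] * (np.cos(phi)[:, None] * e1
                                                + np.sin(phi)[:, None] * e2))

def pole_from_dip(dip_deg, dip_dir_deg):
    """Upward unit pole (normal) of a plane from dip and dip direction (degrees),
    in an east-north-up Cartesian frame."""
    dip, ddir = np.radians(dip_deg), np.radians(dip_dir_deg)
    return np.array([np.sin(dip) * np.sin(ddir), np.sin(dip) * np.cos(ddir), np.cos(dip)])

# Disturb one foliation measurement (dip 40, dip direction 110; illustrative values)
pole = pole_from_dip(40.0, 110.0)
perturbed = sample_vmf(pole, kappa=100.0, n=1000)   # kappa reflects measurement precision
print("mean angular deviation (deg):",
      np.degrees(np.arccos(np.clip(perturbed @ pole, -1, 1))).mean().round(2))
```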

  20. Electron transport in furfural: dependence of the electron ranges on the cross sections and the energy loss distribution functions

    NASA Astrophysics Data System (ADS)

    Ellis-Gibbings, L.; Krupa, K.; Colmenares, R.; Blanco, F.; Muñoz, A.; Mendes, M.; Ferreira da Silva, F.; Limão-Vieira, P.; Jones, D. B.; Brunger, M. J.; García, G.

    2016-09-01

    Recent theoretical and experimental studies have provided a complete set of differential and integral electron scattering cross section data from furfural over a broad energy range. The energy loss distribution functions have been determined in this study by averaging electron energy loss spectra for different incident energies and scattering angles. All these data have been used as input parameters for an event by event Monte Carlo simulation procedure to obtain the electron energy deposition patterns and electron ranges in liquid furfural. The dependence of these results on the input cross sections is then analysed to determine the uncertainty of the simulated values.

  1. Feedforward/feedback control synthesis for performance and robustness

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Liu, Qiang

    1990-01-01

    Both feedforward and feedback control approaches for uncertain dynamical systems are investigated. The control design objective is to achieve a fast settling time (high performance) and robustness (insensitivity) to plant modeling uncertainty. Preshaping of an ideal, time-optimal control input using a 'tapped-delay' filter is shown to provide a rapid maneuver with robust performance. A robust, non-minimum-phase feedback controller is synthesized with particular emphasis on its proper implementation for a non-zero set-point control problem. The proposed feedforward/feedback control approach is robust for a certain class of uncertain dynamical systems, since the control input command computed for a given desired output does not depend on the plant parameters.
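
    Preshaping a command with a tapped-delay filter is commonly implemented as an input shaper, for example the two-impulse zero-vibration (ZV) shaper, whose taps are set by the flexible mode's frequency and damping. The sketch below is a generic illustration with an assumed 2 Hz mode and a bang-bang command; it is not the specific filter design of the paper.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration (ZV) shaper as a tapped-delay FIR filter.
    wn: natural frequency (rad/s), zeta: damping ratio, dt: sample time (s)."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)            # damped frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    t2 = np.pi / wd                                # delay of the second tap
    taps = np.zeros(int(round(t2 / dt)) + 1)
    taps[0] = 1.0 / (1.0 + K)
    taps[-1] = K / (1.0 + K)
    return taps

# Preshape a bang-bang (near time-optimal) command for a lightly damped mode
dt = 0.001
t = np.arange(0.0, 2.0, dt)
u = np.where(t < 0.5, 1.0, np.where(t < 1.0, -1.0, 0.0))    # illustrative bang-bang input

taps = zv_shaper(wn=2 * np.pi * 2.0, zeta=0.02, dt=dt)      # assumed 2 Hz flexible mode
u_shaped = np.convolve(u, taps)[: t.size]

print("shaper taps:", np.round(taps[taps > 0], 3),
      "delay (s):", round((taps.size - 1) * dt, 3))
```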

  2. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  3. Data and Model Uncertainties associated with Biogeochemical Groundwater Remediation and their impact on Decision Analysis

    NASA Astrophysics Data System (ADS)

    Pandey, S.; Vesselinov, V. V.; O'Malley, D.; Karra, S.; Hansen, S. K.

    2016-12-01

    Models and data are used to characterize the extent of contamination and remediation, both of which are dependent upon the complex interplay of processes ranging from geochemical reactions, microbial metabolism, and pore-scale mixing to heterogeneous flow and external forcings. Characterization is fraught with important uncertainties related to the model itself (e.g. conceptualization, model implementation, parameter values) and the data used for model calibration (e.g. sparsity, measurement errors). This research consists of two primary components: (1) Developing numerical models that incorporate the complex hydrogeology and biogeochemistry that drive groundwater contamination and remediation; (2) Utilizing novel techniques for data/model-based analyses (such as parameter calibration and uncertainty quantification) to aid in decision support for optimal uncertainty reduction related to characterization and remediation of contaminated sites. The reactive transport models are developed using PFLOTRAN and are capable of simulating a wide range of biogeochemical and hydrologic conditions that affect the migration and remediation of groundwater contaminants under diverse field conditions. Data/model-based analyses are achieved using MADS, which utilizes Bayesian methods and Information Gap theory to address the data/model uncertainties discussed above. We also use these tools to evaluate different models, which vary in complexity, in order to weigh and rank models based on model accuracy (in representation of existing observations), model parsimony (everything else being equal, models with a smaller number of model parameters are preferred), and model robustness (related to model predictions of unknown future states). These analyses are carried out on synthetic problems, but are directly related to real-world problems; for example, the modeled processes and data inputs are consistent with the conditions at the Los Alamos National Laboratory contamination sites (RDX and Chromium).

  4. Modelling ecological and human exposure to POPs in Venice lagoon - Part II: Quantitative uncertainty and sensitivity analysis in coupled exposure models.

    PubMed

    Radomyski, Artur; Giubilato, Elisa; Ciffroy, Philippe; Critto, Andrea; Brochot, Céline; Marcomini, Antonio

    2016-11-01

    The study is focused on applying uncertainty and sensitivity analysis to support the application and evaluation of large exposure models where a significant number of parameters and complex exposure scenarios might be involved. The recently developed MERLIN-Expo exposure modelling tool was applied to probabilistically assess the ecological and human exposure to PCB 126 and 2,3,7,8-TCDD in the Venice lagoon (Italy). The 'Phytoplankton', 'Aquatic Invertebrate', 'Fish', 'Human intake' and PBPK models available in the MERLIN-Expo library were integrated to create a specific food web to dynamically simulate bioaccumulation in various aquatic species and in the human body over individual lifetimes from 1932 until 1998. MERLIN-Expo is a high tier exposure modelling tool allowing propagation of uncertainty in the model predictions through Monte Carlo simulation. Uncertainty in model output can be further apportioned between parameters by applying built-in sensitivity analysis tools. In this study, uncertainty has been extensively addressed in the distribution functions describing the input data, and its effect on model results has been evaluated by applying sensitivity analysis techniques (the screening Morris method, regression analysis, and the variance-based method EFAST). In the exposure scenario developed for the Lagoon of Venice, the concentrations of 2,3,7,8-TCDD and PCB 126 in human blood turned out to be mainly influenced by a combination of parameters (half-lives of the chemicals, body weight variability, lipid fraction, food assimilation efficiency), physiological processes (uptake/elimination rates), environmental exposure concentrations (sediment, water, food) and eating behaviours (amount of food eaten). In conclusion, this case study demonstrated the feasibility of successfully employing MERLIN-Expo in integrated, high tier exposure assessment. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Uncertainty analysis of vegetation distribution in the northern high latitudes during the 21st century with a dynamic vegetation model

    PubMed Central

    Jiang, Yueyang; Zhuang, Qianlai; Schaphoff, Sibyll; Sitch, Stephen; Sokolov, Andrei; Kicklighter, David; Melillo, Jerry

    2012-01-01

    This study aims to assess how high-latitude vegetation may respond under various climate scenarios during the 21st century, with a focus on analyzing the uncertainty induced by model parameters and how this uncertainty compares to the uncertainty induced by various climates. The analysis was based on a set of 10,000 Monte Carlo ensemble Lund-Potsdam-Jena (LPJ) simulations for the northern high latitudes (45°N and polewards) for the period 1900–2100. The LPJ Dynamic Global Vegetation Model (LPJ-DGVM) was run under contemporary and future climates from four Special Report on Emissions Scenarios (SRES), A1FI, A2, B1, and B2, based on the Hadley Centre General Circulation Model (GCM), and six climate scenarios, X901M, X902L, X903H, X904M, X905L, and X906H, from the Integrated Global System Model (IGSM) at the Massachusetts Institute of Technology (MIT). In the current dynamic vegetation model, some parameters are more important than others in determining the vegetation distribution. Parameters that control plant carbon uptake and light-use efficiency have the predominant influence on the vegetation distribution of both woody and herbaceous plant functional types. The relative importance of different parameters varies temporally and spatially and is influenced by climate inputs. In addition to climate, these parameters play an important role in determining the vegetation distribution in the region. The parameter-based uncertainties contribute most to the total uncertainty. The current warming conditions lead to a complexity of vegetation responses in the region. Temperate trees will be more sensitive to climate variability, compared with boreal forest trees and C3 perennial grasses. This sensitivity would result in a unanimous northward greenness migration due to anomalous warming in the northern high latitudes. Temporally, boreal needleleaved evergreen plants are projected to decline considerably, and a large portion of C3 perennial grass is projected to disappear by the end of the 21st century. In contrast, the area of temperate trees would increase, especially under the most extreme A1FI scenario. As the warming continues, the northward greenness expansion in the Arctic region could continue. PMID:22822437

  6. How model and input uncertainty impact maize yield simulations in West Africa

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties, however, exist not only in the model design and model parameters, but also, and perhaps even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models’ response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models’ ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of the soil, climate and management information influences the simulated crop yields in both models. However, the difference between models is larger than that between input datasets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement between simulated and observed temporal variability is lower due to non-climatic factors, e.g. investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.

  7. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess the risk of ash ingestion along any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of the input (source) parameters of a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes' theorem is then used to determine a normalized posterior probability distribution and, from that, a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
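
    A minimal sketch of the sampling-and-Bayes workflow described in the abstract follows. The surrogate run_transport_model, the prior ranges and the synthetic satellite data are all invented for illustration; a real application would replace the surrogate with an ash transport model and the synthetic data with retrieved ash column loads.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def run_transport_model(source_height, mass_rate, obs_grid):
        """Stand-in for an ash transport model: ash load on the observation grid."""
        # purely illustrative surrogate, not a real dispersion code
        return mass_rate * np.exp(-((obs_grid - source_height) ** 2) / 8.0)

    obs_grid = np.linspace(0.0, 20.0, 50)            # e.g. downwind distance bins
    truth = run_transport_model(9.0, 5.0, obs_grid)
    satellite = truth + rng.normal(0.0, 0.3, truth.shape)   # synthetic "observations"
    sigma_obs = 0.3

    # 1. Sample source parameters from their prior distributions
    n = 2000
    heights = rng.uniform(5.0, 15.0, n)              # plume height prior
    rates   = rng.lognormal(np.log(4.0), 0.5, n)     # mass eruption rate prior

    # 2. Run the model for each sample and score it against the satellite data
    log_like = np.empty(n)
    for i in range(n):
        pred = run_transport_model(heights[i], rates[i], obs_grid)
        log_like[i] = -0.5 * np.sum(((pred - satellite) / sigma_obs) ** 2)

    # 3. Bayes' theorem: normalized posterior weights over the prior samples
    w = np.exp(log_like - log_like.max())
    w /= w.sum()

    # 4. Forecast quantities as posterior-weighted ensemble statistics
    post_height = np.sum(w * heights)
    print(f"posterior mean plume height ~ {post_height:.2f} (truth 9.0)")
    ```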

  8. Radius Determination of Solar-type Stars Using Asteroseismology: What to Expect from the Kepler Mission

    NASA Astrophysics Data System (ADS)

    Stello, Dennis; Chaplin, William J.; Bruntt, Hans; Creevey, Orlagh L.; García-Hernández, Antonio; Monteiro, Mario J. P. F. G.; Moya, Andrés; Quirion, Pierre-Olivier; Sousa, Sergio G.; Suárez, Juan-Carlos; Appourchaux, Thierry; Arentoft, Torben; Ballot, Jerome; Bedding, Timothy R.; Christensen-Dalsgaard, Jørgen; Elsworth, Yvonne; Fletcher, Stephen T.; García, Rafael A.; Houdek, Günter; Jiménez-Reyes, Sebastian J.; Kjeldsen, Hans; New, Roger; Régulo, Clara; Salabert, David; Toutain, Thierry

    2009-08-01

    For distant stars, as observed by the NASA Kepler satellite, parallax information is currently of fairly low quality and is not complete. This limits the precision with which the absolute sizes of the stars and their potential transiting planets can be determined by traditional methods. Asteroseismology will be used to aid the radius determination of stars observed during NASA's Kepler mission. We report on the recent asteroFLAG hare-and-hounds Exercise #2, where a group of "hares" simulated data of F-K main-sequence stars that a group of "hounds" sought to analyze, aimed at determining the stellar radii. We investigated stars in the range 9 < V < 15, both with and without parallaxes. We further test different uncertainties in Teff, and compare results with and without using asteroseismic constraints. Based on the asteroseismic large frequency spacing, obtained from simulations of 4 yr time series data from the Kepler mission, we demonstrate that the stellar radii can be correctly and precisely determined, when combined with traditional stellar parameters from the Kepler Input Catalogue. The radii found by the various methods used by each independent hound generally agree with the true values of the artificial stars to within 3%, when the large frequency spacing is used. This is 5-10 times better than the results where seismology is not applied. These results give strong confidence that radius estimation can be performed to better than 3% for solar-like stars using automatic pipeline reduction. Even when the stellar distance and luminosity are unknown we can obtain the same level of agreement. Given the uncertainties used for this exercise we find that the input log g and parallax do not help to constrain the radius, and that Teff and metallicity are the only parameters we need in addition to the large frequency spacing. It is the uncertainty in the metallicity that dominates the uncertainty in the radius.
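
    For context, the most common way the large frequency spacing enters a direct radius estimate is through the standard asteroseismic scaling relations; the exercise described above instead combined the spacing with Kepler Input Catalogue parameters in grid-based pipelines, so the snippet below is only an illustration of the underlying scaling, using commonly quoted solar reference values.

    ```python
    import numpy as np

    # Solar reference values (commonly used; exact calibrations vary by study)
    DNU_SUN   = 135.1    # muHz, large frequency separation
    NUMAX_SUN = 3090.0   # muHz, frequency of maximum power
    TEFF_SUN  = 5777.0   # K

    def radius_from_scaling(dnu, numax, teff):
        """Direct asteroseismic radius estimate (in solar radii) from the standard
        scaling relations  dnu ~ sqrt(M/R^3)  and  numax ~ M R^-2 Teff^-1/2."""
        return (numax / NUMAX_SUN) * (dnu / DNU_SUN) ** -2 * np.sqrt(teff / TEFF_SUN)

    # Example: a star with dnu = 120 muHz, numax = 2500 muHz, Teff = 6100 K
    print(f"R ~ {radius_from_scaling(120.0, 2500.0, 6100.0):.2f} R_sun")
    ```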

  9. The Absolute Stability Analysis in Fuzzy Control Systems with Parametric Uncertainties and Reference Inputs

    NASA Astrophysics Data System (ADS)

    Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei

    This study analyzes the absolute stability of P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, the actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady-state errors of the fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability, based on Lur'e systems, is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. In contrast to previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant using the parametric robust Popov criterion. Moreover, a fuzzy current-controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined in terms of the various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.

  10. How uncertain is model-based prediction of copper loads in stormwater runoff?

    PubMed

    Lindblom, E; Ahlman, S; Mikkelsen, P S

    2007-01-01

    In this paper, we conduct a systematic analysis of the uncertainty associated with estimating the total load of pollution (copper) from a separate stormwater drainage system, conditioned on a specific combination of input data, a dynamic conceptual pollutant accumulation-washout model and measurements (runoff volumes and pollutant masses). We use the generalized likelihood uncertainty estimation (GLUE) methodology and generate posterior parameter distributions that result in model outputs encompassing a significant number of the highly variable measurements. Given the applied pollution accumulation-washout model and a total of 57 measurements during one month, the total copper mass can be predicted within a range of ±50% of the median value. The message is that this relatively large uncertainty should be acknowledged when making statements about micropollutant loads estimated from dynamic models, even when calibrated with on-site concentration data.
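
    The GLUE recipe used in the paper can be sketched as follows: sample parameters from broad priors, score each simulation with an informal likelihood, keep the "behavioural" parameter sets above a subjective threshold, and form likelihood-weighted prediction limits. The toy accumulation-washout model, priors, threshold and synthetic data below are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def washout_model(k_acc, k_wash, rain):
        """Toy accumulation/wash-off model returning per-event copper loads."""
        mass, loads = 0.0, []
        for r in rain:
            mass += k_acc                                 # dry-weather build-up
            washed = mass * (1.0 - np.exp(-k_wash * r))   # rain-driven wash-off
            mass -= washed
            loads.append(washed)
        return np.array(loads)

    rain = rng.gamma(2.0, 2.0, 57)                        # synthetic rain depths
    obs  = washout_model(1.0, 0.08, rain) * rng.lognormal(0.0, 0.3, 57)

    # GLUE: sample parameters from broad priors, keep "behavioural" sets
    n = 20_000
    k_acc  = rng.uniform(0.2, 3.0, n)
    k_wash = rng.uniform(0.01, 0.3, n)
    scores = np.empty(n)
    for i in range(n):
        sim = washout_model(k_acc[i], k_wash[i], rain)
        scores[i] = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    behavioural = scores > 0.3                            # subjective threshold
    weights = scores[behavioural] / scores[behavioural].sum()

    # Likelihood-weighted prediction limits for the total load
    totals = np.array([washout_model(a, b, rain).sum()
                       for a, b in zip(k_acc[behavioural], k_wash[behavioural])])
    order = np.argsort(totals)
    cdf = np.cumsum(weights[order])
    lo, med, hi = np.interp([0.05, 0.5, 0.95], cdf, totals[order])
    print(f"total load: median {med:.1f}, 90% band [{lo:.1f}, {hi:.1f}]")
    ```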

  11. A single-loop optimization method for reliability analysis with second order uncertainty

    NASA Astrophysics Data System (ADS)

    Xie, Shaojun; Pan, Baisong; Du, Xiaoping

    2015-08-01

    Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.

  12. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS), a constrained Monte Carlo sampling scheme. Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions, so LHS UNIX Library/Standalone provides a way to generate multivariate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
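
    A minimal implementation of the sampling scheme described above (equal-probability strata, one random point per stratum, random pairing across variables) might look like the following; it omits the restricted-pairing step that the library uses to impose user-specified correlations.

    ```python
    import numpy as np
    from scipy import stats

    def latin_hypercube(n_samples, marginals, rng=None):
        """Draw an (n_samples x n_vars) Latin hypercube from the given marginal
        distributions (any scipy.stats frozen distributions)."""
        rng = np.random.default_rng(rng)
        sample = np.empty((n_samples, len(marginals)))
        for j, dist in enumerate(marginals):
            # one equal-probability stratum per sample, a random point in each,
            # then a random permutation to pair columns
            strata = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
            sample[:, j] = dist.ppf(rng.permutation(strata))
        return sample

    # Example: three uncertain inputs with different distribution types
    marginals = [stats.uniform(loc=0.0, scale=10.0),
                 stats.norm(loc=5.0, scale=1.0),
                 stats.lognorm(s=0.5, scale=2.0)]
    X = latin_hypercube(100, marginals, rng=42)

    def simulation(x):          # placeholder for the analyst's simulation code
        return x[0] + 2.0 * x[1] - x[2]

    # Run the simulation per sample and examine input-output correlations
    y = np.apply_along_axis(simulation, 1, X)
    print(np.corrcoef(np.column_stack([X, y]), rowvar=False)[-1, :-1])
    ```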

  13. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach the entire time histories of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of a non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to jointly estimate the time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or poor initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.
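
    The extended ML step can be illustrated on a scalar toy problem: jointly estimate a model parameter and the noise amplitude from a batch of response data, then bound the estimation uncertainty with the Cramer-Rao lower bound computed from response sensitivities. The decaying-oscillation "structure", its parameter values and the finite-difference sensitivities below are stand-ins for the FE model and the DDM sensitivities used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)

    # Synthetic "measured response": a decaying oscillation whose stiffness-like
    # parameter theta and noise amplitude sigma are both unknown.
    t = np.linspace(0.0, 5.0, 500)
    def response(theta):
        return np.exp(-0.2 * t) * np.sin(np.sqrt(theta) * t)

    theta_true, sigma_true = 9.0, 0.05
    y_meas = response(theta_true) + rng.normal(0.0, sigma_true, t.size)

    # Extended maximum-likelihood estimation of (theta, sigma) from the whole batch
    def neg_log_like(p):
        theta, log_sigma = p
        sigma = np.exp(log_sigma)
        r = y_meas - response(theta)
        return 0.5 * np.sum(r ** 2) / sigma ** 2 + t.size * log_sigma

    opt = minimize(neg_log_like, x0=[8.0, np.log(0.1)], method="L-BFGS-B",
                   bounds=[(0.5, 50.0), (np.log(1e-4), np.log(1.0))])
    theta_hat, sigma_hat = opt.x[0], np.exp(opt.x[1])

    # CRLB for theta: inverse Fisher information, built from the response
    # sensitivity dy/dtheta (finite differences here instead of the DDM).
    eps = 1e-6
    s = (response(theta_hat + eps) - response(theta_hat - eps)) / (2 * eps)
    fisher = np.sum(s ** 2) / sigma_hat ** 2
    print(f"theta = {theta_hat:.3f} +/- {np.sqrt(1.0 / fisher):.4f} (true {theta_true})")
    print(f"sigma = {sigma_hat:.4f} (true {sigma_true})")
    ```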

  14. Application of Interval Predictor Models to Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
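
    The minimal-spread idea can be reduced to a small linear program: fit upper and lower envelope functions that contain every observation while minimising the average spread at the data points. The sketch below uses a polynomial basis instead of the paper's radial basis functions and omits the variation/oscillation and centering constraints as well as the probabilistic certificate; the data and basis degree are arbitrary.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)

    # Noisy input-output data from an unknown mechanism
    x = np.sort(rng.uniform(-1.0, 1.0, 80))
    y = np.sin(3.0 * x) + rng.uniform(-0.2, 0.2, x.size) * (1.0 + np.abs(x))

    # Basis (simple polynomials here; the paper uses radial basis functions)
    degree = 4
    Phi = np.vander(x, degree + 1)            # rows: [x^4, ..., x, 1]
    m = Phi.shape[1]

    # Decision variables: upper-envelope coeffs a_u and lower-envelope coeffs a_l.
    # Minimise the mean spread  mean(Phi @ (a_u - a_l))  subject to containment:
    # Phi @ a_u >= y  and  Phi @ a_l <= y.
    c = np.concatenate([Phi.mean(axis=0), -Phi.mean(axis=0)])
    A_ub = np.block([[-Phi, np.zeros_like(Phi)],
                     [np.zeros_like(Phi), Phi]])
    b_ub = np.concatenate([-y, y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * m))

    a_u, a_l = res.x[:m], res.x[m:]
    print("mean interval width:", res.fun)
    print("all data contained:",
          np.all((Phi @ a_u >= y - 1e-8) & (Phi @ a_l <= y + 1e-8)))
    ```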

  15. On Time Delay Margin Estimation for Adaptive Control and Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2011-01-01

    This paper presents methods for estimating the time delay margin for adaptive control of input delay systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent an adaptive law by a locally bounded linear approximation within a small time window. The time delay margin of this input delay system represents a local stability measure and is computed analytically by three methods: Pade approximation, the Lyapunov-Krasovskii method, and the matrix measure method. These methods are applied to the standard model-reference adaptive control, the σ-modification adaptive law, and the optimal control modification adaptive law. The windowing analysis results in non-unique estimates of the time delay margin since it is dependent on the length of the time window and on parameters which vary from one time window to the next. The optimal control modification adaptive law overcomes this limitation in that, as the adaptive gain tends to infinity and if the matched uncertainty is linear, the closed-loop input delay system tends to an LTI system. A lower bound on the time delay margin of this system can then be estimated uniquely without the need for the windowing analysis. Simulation results demonstrate the feasibility of the bounded linear stability method for time delay margin estimation.

  16. Assessing the impact of radiative parameter uncertainty on plant growth simulation

    NASA Astrophysics Data System (ADS)

    Viskari, T.; Serbin, S.; Dietze, M.; Shiklomanov, A. N.

    2015-12-01

    Current Earth system models do not adequately project either the magnitude or the sign of carbon fluxes and storage associated with the terrestrial carbon cycle resulting in significant uncertainties in their potential feedbacks on the future climate system. A primary reason for the current uncertainty in these models is the lack of observational constraints of key biomes at relevant spatial and temporal scales. There is an increasingly large and highly resolved amount of remotely sensed observations that can provide the critical model inputs. However, effectively incorporating these data requires the use of radiative transfer models and their associated assumptions. How these parameter assumptions and uncertainties affect model projections for, e.g., leaf physiology, soil temperature or growth has not been examined in depth. In this presentation we discuss the use of high spectral resolution observations at the near surface to landscape scales to inform ecosystem process modeling efforts, particularly the uncertainties related to properties describing the radiation regime within vegetation canopies and the impact on C cycle projections. We illustrate that leaf and wood radiative properties and their associated uncertainties have an important impact on projected forest carbon uptake and storage. We further show the need for a strong data constraint on these properties and discuss sources of this remotely sensed information and methods for data assimilation into models. We present our approach as an efficient means for understanding and correcting implicit assumptions and model structural deficiencies in radiation transfer in vegetation canopies. Ultimately, a better understanding of the radiation balance of ecosystems will improve regional and global scale C and energy balance projections.

  17. Assessment of BTEX-induced health risk under multiple uncertainties at a petroleum-contaminated site: An integrated fuzzy stochastic approach

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodong; Huang, Guo H.

    2011-12-01

    Groundwater pollution has attracted increasing attention over the past decades. An assessment of groundwater contamination risk is desirable to provide a sound basis for supporting risk-based management decisions. Therefore, the objective of this study is to develop an integrated fuzzy stochastic approach to evaluate the risks of BTEX-contaminated groundwater under multiple uncertainties. It consists of an integrated interval fuzzy subsurface modeling system (IIFMS) and an integrated fuzzy second-order stochastic risk assessment (IFSOSRA) model. The IIFMS is developed based on factorial design, interval analysis, and a fuzzy sets approach to predict contaminant concentrations under hybrid uncertainties. Two input parameters (longitudinal dispersivity and porosity) are considered to be uncertain with known fuzzy membership functions, and intrinsic permeability is considered to be an interval number with unknown distribution information. A factorial design is conducted to evaluate the interactive effects of the three uncertain factors on the modeling outputs through the developed IIFMS. The IFSOSRA model can systematically quantify variability and uncertainty, as well as their hybrids, presented as fuzzy, stochastic and second-order stochastic parameters in health risk assessment. The developed approach has been applied to the management of a real-world petroleum-contaminated site within a western Canada context. The results indicate that multiple uncertainties, under a combination of information with various data-quality levels, can be effectively addressed to provide support in identifying proper remedial efforts. A unique contribution of this research is the development of an integrated fuzzy stochastic approach for handling various forms of uncertainties associated with simulation and risk assessment efforts.

  18. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent of the model, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed of how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth of soil moisture. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.

  19. Representing radar rainfall uncertainty with ensembles based on a time-variant geostatistical error modelling approach

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei

    2017-05-01

    The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with a good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications are important to quantify the relative importance of the uncertainty associated with the radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures a fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirements of operational applications to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and 1 h accumulation over an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping-bucket rain gauges from the Environment Agency. 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation. The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.

  20. On the use of published radiobiological parameters and the evaluation of NTCP models regarding lung pneumonitis in clinical breast radiotherapy.

    PubMed

    Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki

    2011-04-01

    In this study we sought to evaluate and emphasize the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the sets published in the literature, and a theoretical mean parameter set was computed. In order to investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models over these ranges revealed a large area of coincidence. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, constraining their predictive ability even further. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definitive but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provides a quite accurate fit of the NTCP models to the considered endpoint.
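
    For reference, the LKB model evaluated in the study reduces to a generalised-EUD dose summary passed through a normal cumulative distribution. The sketch below shows that calculation; the DVH and the parameter values (chosen to resemble published pneumonitis sets) are illustrative only and are not taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    def lkb_ntcp(doses, volumes, td50, m, n):
        """Lyman-Kutcher-Burman NTCP from a differential DVH.
        doses   : bin doses (Gy)
        volumes : fractional organ volume in each bin (sums to 1)
        td50, m, n : radiobiological parameters for the chosen endpoint."""
        geud = np.sum(volumes * doses ** (1.0 / n)) ** n   # generalised EUD
        t = (geud - td50) / (m * td50)
        return norm.cdf(t)

    # Illustrative lung DVH and example parameter values
    doses   = np.array([2.0, 8.0, 15.0, 25.0, 40.0])
    volumes = np.array([0.45, 0.25, 0.15, 0.10, 0.05])
    print(f"NTCP = {lkb_ntcp(doses, volumes, td50=30.8, m=0.37, n=0.99):.3f}")
    ```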

  1. A waste characterisation procedure for ADM1 implementation based on degradation kinetics.

    PubMed

    Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F

    2012-09-01

    In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  4. Uncertainty of inhalation dose coefficients for representative physical and chemical forms of iodine-131

    NASA Astrophysics Data System (ADS)

    Harvey, Richard Paul, III

    Releases of radioactive material have occurred at various Department of Energy (DOE) weapons facilities and facilities associated with the nuclear fuel cycle in the generation of electricity. Many different radionuclides have been released to the environment with resulting exposure of the population to these various sources of radioactivity. Radioiodine has been released from a number of these facilities and is a potential public health concern due to its physical and biological characteristics. Iodine exists as various isotopes, but our focus is on 131I due to its relatively long half-life, its prevalence in atmospheric releases and its contribution to offsite dose. The assumption of physical and chemical form is speculated to have a profound impact on the deposition of radioactive material within the respiratory tract. In the case of iodine, it has been shown that more than one type of physical and chemical form may be released to, or exist in, the environment; iodine can exist as a particle or as a gas. The gaseous species can be further segregated based on chemical form: elemental, inorganic, and organic iodides. Chemical compounds in each class are assumed to behave similarly with respect to biochemistry. Studies at Oak Ridge National Laboratories have demonstrated that 131I is released as a particulate, as well as in elemental, inorganic and organic chemical form. The internal dose estimate from 131I may be very different depending on the effect that chemical form has on fractional deposition, gas uptake, and clearance in the respiratory tract. There are many sources of uncertainty in the estimation of environmental dose, including source term, airborne transport of radionuclides, and internal dosimetry. Knowledge of uncertainty in internal dosimetry is essential for estimating dose to members of the public and for determining total uncertainty in dose estimation. Important calculational steps in any lung model are the regional estimation of deposition fractions and of gas uptake of radionuclides in various regions of the lung. Variability in regional radionuclide deposition within lung compartments may significantly contribute to the overall uncertainty of the lung model. The uncertainty of lung deposition and biological clearance is dependent upon physiological and anatomical parameters of individuals as well as characteristic parameters of the particulate material. These parameters introduce uncertainty into internal dose estimates due to their inherent variability. Anatomical and physiological input parameters are age and gender dependent. This work has determined the uncertainty in internal dose estimates and the sensitive parameters involved in modeling particulate deposition and gas uptake of different physical and chemical forms of 131I, with age and gender dependencies.

  5. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can be used effectively to represent highly nonlinear, multivariate functions. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a blackbox model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to the first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians. A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
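
    The weight-uncertainty propagation described above can be approximated by brute force: perturb the trained weights according to their posterior spread and re-evaluate both the outputs and the input-output Jacobians. The tiny tanh network, the diagonal weight posterior and its standard deviation below are placeholders, and the Monte Carlo loop stands in for the analytic propagation used in the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # A tiny "trained" network (weights assumed already estimated); the Bayesian
    # analysis would supply a covariance for the weights, here taken as diagonal.
    W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    weight_sd = 0.05                     # stand-in for the posterior weight std dev

    def forward(x, W1, b1, W2, b2):
        h = np.tanh(x @ W1 + b1)
        return h @ W2 + b2

    def jacobian(x, W1, b1, W2, b2):
        """Analytic input-output Jacobian of the tanh network at input x."""
        h_pre = x @ W1 + b1
        return (W1 * (1.0 - np.tanh(h_pre) ** 2)) @ W2   # shape (n_inputs, 1)

    x = np.array([0.3, -1.2, 0.7])

    # Monte Carlo over the weight posterior: output error bars and Jacobian spread
    outs, jacs = [], []
    for _ in range(2000):
        dW1 = W1 + rng.normal(0.0, weight_sd, W1.shape)
        dW2 = W2 + rng.normal(0.0, weight_sd, W2.shape)
        outs.append(forward(x, dW1, b1, dW2, b2))
        jacs.append(jacobian(x, dW1, b1, dW2, b2))
    outs, jacs = np.array(outs), np.array(jacs)
    print("output mean +/- sd:", outs.mean(), outs.std())
    print("Jacobian sd per input:", jacs.std(axis=0).ravel())
    ```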

  6. Solar energy system economic evaluation for IBM System 3, Glendo, Wyoming

    NASA Technical Reports Server (NTRS)

    1980-01-01

    This analysis was based on the technical and economic models in the f-chart design procedure, with inputs based on the characteristics of the parameters of interest: present worth of system cost over a projected twenty-year life, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables was also investigated.

  7. Feedback system design with an uncertain plant

    NASA Technical Reports Server (NTRS)

    Milich, D.; Valavani, L.; Athans, M.

    1986-01-01

    A method is developed to design a fixed-parameter compensator for a linear, time-invariant, SISO (single-input single-output) plant model characterized by significant structured, as well as unstructured, uncertainty. The controller minimizes the H(infinity) norm of the worst-case sensitivity function over the operating band and the resulting feedback system exhibits robust stability and robust performance. It is conjectured that such a robust nonadaptive control design technique can be used on-line in an adaptive control system.

  8. Multilayer Relaxation and Surface Energies of Metallic Surfaces

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Rodriguez, Agustin M.; Ferrante, John

    1994-01-01

    The perpendicular and parallel multilayer relaxations of fcc (210) surfaces are studied using equivalent crystal theory (ECT). A comparison with experimental and theoretical results is made for Al(210). The effect of uncertainties in the input parameters on the magnitudes and ordering of surface relaxations for this semiempirical method is estimated. A new measure of surface roughness is proposed. Predictions for the multilayer relaxations and surface energies of the (210) face of Cu and Ni are also included.

  9. Midplane neutral density profiles in the National Spherical Torus Experiment

    DOE PAGES

    Stotler, D. P.; Scotti, F.; Bell, R. E.; ...

    2015-08-13

    Atomic and molecular density data in the outer midplane of NSTX [Ono et al., Nucl. Fusion 40, 557 (2000)] are inferred from tangential camera data via a forward modeling procedure using the DEGAS 2 Monte Carlo neutral transport code. The observed Balmer-β light emission data from 17 shots during the 2010 NSTX campaign display no obvious trends with discharge parameters such as the divertor Balmer-α emission level or edge deuterium ion density. Simulations of 12 time slices in 7 of these discharges produce molecular densities near the vacuum vessel wall of 2–8 × 10^17 m^-3 and atomic densities ranging from 1 to 7 × 10^16 m^-3; neither has a clear correlation with other parameters. Validation of the technique, begun in an earlier publication, is continued with an assessment of the sensitivity of the simulated camera image and neutral densities to uncertainties in the data input to the model. The simulated camera image is sensitive to the plasma profiles and virtually nothing else. The neutral densities at the vessel wall depend most strongly on the spatial distribution of the source; simulations with a localized neutral source yield densities within a factor of two of the baseline, uniform source, case. Furthermore, the uncertainties in the neutral densities associated with other model inputs and assumptions are ≤ 50%.

  10. A simple model for calculating air pollution within street canyons

    NASA Astrophysics Data System (ADS)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of the results shows that there is a good agreement between estimated and observed hourly concentrations (e.g. the fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s^-1 than for lower wind speeds and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
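
    A scaling of the type described above can be written in a few lines. The functional form below (background plus emission rate divided by canyon width and a dispersive velocity that combines wind- and traffic-induced components through two empirical parameters) follows the abstract's description in spirit only; the actual SEUS parameterisations, and the parameter values shown, are assumptions for illustration.

    ```python
    import numpy as np

    def street_concentration(q_emis, width, u_wind, u_traffic, c_background,
                             a=0.1, b=0.3):
        """Scaling-type street-canyon estimate (illustrative form only):
            C = C_b + Q / (W * u_d),  u_d = sqrt((a*u_wind)^2 + (b*u_traffic)^2)
        a, b play the role of the two dimensionless empirical parameters
        mentioned in the abstract (placeholder values)."""
        u_d = np.sqrt((a * u_wind) ** 2 + (b * u_traffic) ** 2)
        return c_background + q_emis / (width * u_d)

    # Example: an hourly NOx estimate for a 20 m wide canyon
    print(street_concentration(q_emis=120.0,      # ug m^-1 s^-1 line-source strength
                               width=20.0,        # m
                               u_wind=3.0,        # m s^-1 roof-level wind
                               u_traffic=1.0,     # m s^-1 traffic-induced scale
                               c_background=40.0))  # ug m^-3
    ```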

  11. Short stack modeling of degradation in solid oxide fuel cells. Part II. Sensitivity and interaction analysis

    NASA Astrophysics Data System (ADS)

    Gazzarri, J. I.; Kesler, O.

    In the first part of this two-paper series, we presented a numerical model of the impedance behaviour of a solid oxide fuel cell (SOFC) aimed at simulating the change in the impedance spectrum induced by contact degradation at the interconnect-electrode and electrode-electrolyte interfaces. The purpose of that investigation was to develop a non-invasive diagnostic technique to identify degradation modes in situ. In the present paper, we appraise the predictive capabilities of the proposed method in terms of its robustness to uncertainties in the input parameters, many of which are very difficult to measure independently. We applied this technique to the degradation modes simulated in Part I, in addition to anode sulfur poisoning. Electrode delamination showed the highest robustness to input parameter variations, followed by interconnect oxidation and interconnect detachment. The most sensitive degradation mode was sulfur poisoning, due to strong parameter interactions. In addition, we simulate several scenarios with two simultaneous degradation modes, assessing the method's capabilities and limitations for predicting the electrochemical behaviour of SOFCs undergoing multiple simultaneous degradation modes.

  12. Reliability of a new biokinetic model of zirconium in internal dosimetry: part II, parameter sensitivity analysis.

    PubMed

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential for the assessment of internal doses and a radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr); one was reported by the International Commission on Radiological Protection (ICRP), and one was developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the most influential model parameter on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the parameters of transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and the daily urinary excretion; however, after day 1,000, the transfer rate from bone to blood becomes dominant. For the retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation; the transfer rate from blood to the upper larger intestine contributes a lot in the later days; i.e., after day 300. The alimentary tract absorption factor (fA) influences mostly the integral of radioactivity of Zr in most source organs after ingestion.

  13. Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations

    NASA Astrophysics Data System (ADS)

    Niemeier, Wolfgang; Tengen, Dieter

    2017-06-01

    In this article, first ideas are presented for extending the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to the GUM (Guide to the Expression of Uncertainty in Measurement). This approach is well established in metrology but rarely adopted within geodesy. The second step consists of Monte-Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by means of their covariance matrix. It allows a new way of uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the "Metsähovi Fundamental Station", Finland is used, where classical geodetic observations are combined with GNSS data.
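
    The two-step idea (GUM-style uncertainty budgets for the raw observations, then Monte-Carlo re-runs of the full adjustment) can be sketched on a toy 2-D trilateration network. The station geometry, the uncertainty budget and the Gauss-Newton adjustment below are illustrative and much simpler than the Metsähovi local tie network.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Known station coordinates and the unknown point (used only to simulate data)
    stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    p_true = np.array([40.0, 60.0])

    # Step 1 (GUM-style input modelling): each measured distance gets a standard
    # uncertainty combining an instrument term and a centring term (illustrative).
    u_dist = np.sqrt(0.003 ** 2 + 0.002 ** 2)          # metres
    d_meas = np.linalg.norm(stations - p_true, axis=1) + rng.normal(0, u_dist, 3)

    def adjust(distances):
        """Nonlinear least-squares adjustment (Gauss-Newton) for the 2-D point."""
        p = np.array([50.0, 50.0])
        for _ in range(10):
            diff = p - stations
            d0 = np.linalg.norm(diff, axis=1)
            A = diff / d0[:, None]                      # design matrix (Jacobian)
            w = distances - d0                          # misclosure vector
            p = p + np.linalg.lstsq(A, w, rcond=None)[0]
        return p

    # Step 2 (Monte-Carlo propagation): re-simulate the raw observations from
    # their uncertainty model and re-run the full adjustment each time.
    cloud = np.array([adjust(d_meas + rng.normal(0, u_dist, 3)) for _ in range(5000)])
    print("coordinate std devs [m]:", cloud.std(axis=0))
    print("empirical covariance:\n", np.cov(cloud, rowvar=False))
    ```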

  14. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using the optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
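
    The MVFSA idea, in the very reduced form of an annealed Metropolis sampler that concentrates samples in low-error regions of parameter space, can be sketched as follows. The two-parameter error surface, the bounds and the annealing schedule are placeholders; in the study each evaluation would be a WRF run scored against observations, and the actual MVFSA algorithm differs in its proposal and temperature schedules.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Placeholder "model error" (skill-score misfit) as a function of two tunable
    # parameters; in practice this would require a model run per evaluation.
    def model_error(p):
        return (p[0] - 0.4) ** 2 + 2.0 * (p[1] - 1.5) ** 2 + 0.05 * np.sin(15 * p[0])

    bounds = np.array([[0.0, 1.0],     # e.g. downdraft/updraft flux ratio
                       [0.5, 3.0]])    # e.g. relative CAPE consumption time

    # Annealed Metropolis sampling: propose from a shrinking neighbourhood,
    # accept by the Metropolis rule, and keep every visited sample so the chain
    # concentrates in low-error regions of parameter space.
    p = bounds.mean(axis=1)
    e = model_error(p)
    samples = []
    for k in range(1, 2001):
        T = 1.0 / k                                   # annealing schedule
        step = T * (bounds[:, 1] - bounds[:, 0])
        q = np.clip(p + rng.normal(0.0, step), bounds[:, 0], bounds[:, 1])
        e_q = model_error(q)
        if e_q < e or rng.uniform() < np.exp(-(e_q - e) / max(T, 1e-6)):
            p, e = q, e_q
        samples.append(p.copy())

    samples = np.array(samples)
    best = samples[np.argmin([model_error(s) for s in samples])]
    print("best parameters found:", best)
    ```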

  15. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using the optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through vigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  16. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess strategies for UQ and parameter optimization at both global and regional scales.
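
    For readers unfamiliar with the sampling strategy described in this record, the sketch below illustrates the general idea behind an annealing-style importance sampler in Python: a skill score is minimized while a temperature schedule controls how freely the sampler explores a five-dimensional parameter space. The parameter bounds, cooling schedule, and skill_score function are placeholder assumptions for illustration, not the authors' MVFSA implementation or a WRF run.

      import numpy as np

      # Hypothetical bounds for five tunable parameters (illustrative only).
      bounds = np.array([[0.0, 1.0]] * 5)

      def skill_score(params):
          # Placeholder: in practice this would run the model and compare
          # simulated precipitation against observations; here a synthetic error.
          target = np.array([0.3, 0.7, 0.5, 0.2, 0.9])
          return np.sum((params - target) ** 2)

      rng = np.random.default_rng(42)
      current = rng.uniform(bounds[:, 0], bounds[:, 1])
      best = current.copy()
      current_err = best_err = skill_score(current)

      n_iter = 2000
      for i in range(n_iter):
          temp = max(1e-3, 1.0 - i / n_iter)               # cooling schedule
          step = rng.normal(scale=0.05 * temp, size=5)     # smaller moves as T drops
          candidate = np.clip(current + step, bounds[:, 0], bounds[:, 1])
          err = skill_score(candidate)
          # Metropolis-style acceptance: always accept improvements, sometimes
          # accept worse moves so the chain keeps exploring the parameter space.
          if err < current_err or rng.uniform() < np.exp((current_err - err) / temp):
              current, current_err = candidate, err
              if err < best_err:
                  best, best_err = candidate.copy(), err

      print("best parameters:", best, "error:", best_err)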

  17. An algorithm for U-Pb isotope dilution data reduction and uncertainty propagation

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Bowring, J. F.; Bowring, S. A.

    2011-06-01

    High-precision U-Pb geochronology by isotope dilution-thermal ionization mass spectrometry is integral to a variety of Earth science disciplines, but its ultimate resolving power is quantified by the uncertainties of calculated U-Pb dates. As analytical techniques have advanced, formerly small sources of uncertainty are increasingly important, and thus previous simplifications for data reduction and uncertainty propagation are no longer valid. Although notable previous efforts have treated propagation of correlated uncertainties for the U-Pb system, the equations, uncertainties, and correlations have been limited in number and subject to simplification during propagation through intermediary calculations. We derive and present a transparent U-Pb data reduction algorithm that transforms raw isotopic data and measured or assumed laboratory parameters into the isotopic ratios and dates geochronologists interpret without making assumptions about the relative size of sample components. To propagate uncertainties and their correlations, we describe, in detail, a linear algebraic algorithm that incorporates all input uncertainties and correlations without limiting or simplifying covariance terms to propagate them through intermediate calculations. Finally, a weighted mean algorithm is presented that utilizes matrix elements from the uncertainty propagation algorithm to propagate random and systematic uncertainties for data comparison with other U-Pb labs and other geochronometers. The linear uncertainty propagation algorithms are verified with Monte Carlo simulations of several typical analyses. We propose that our algorithms be considered by the community for implementation to improve the collaborative science envisioned by the EARTHTIME initiative.
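
    As a point of reference for the linear algebraic propagation described above, the sketch below shows the generic first-order rule Sigma_y = J Sigma_x J^T implemented in Python with a finite-difference Jacobian. The toy "date" function and covariance values are assumptions for illustration, not the authors' U-Pb reduction equations.

      import numpy as np

      def propagate_covariance(f, x, cov_x, eps=1e-6):
          # First-order (linear) propagation of an input covariance through f.
          # f maps an input vector to an output vector; the Jacobian is estimated
          # by central finite differences. Returns output values and covariance.
          x = np.asarray(x, dtype=float)
          y0 = np.atleast_1d(f(x))
          J = np.zeros((y0.size, x.size))
          for j in range(x.size):
              dx = np.zeros_like(x)
              dx[j] = eps * max(1.0, abs(x[j]))
              J[:, j] = (np.atleast_1d(f(x + dx)) - np.atleast_1d(f(x - dx))) / (2 * dx[j])
          return y0, J @ cov_x @ J.T

      # Toy example: a "date" that depends on two correlated measured ratios.
      f = lambda x: np.array([np.log(1.0 + x[0] / x[1])])
      x = np.array([0.1, 2.0])
      cov_x = np.array([[1e-6, 2e-7],
                        [2e-7, 4e-6]])
      date, cov_date = propagate_covariance(f, x, cov_x)
      print(date, cov_date)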

  18. A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling

    NASA Astrophysics Data System (ADS)

    Chen, L.; Gong, Y.; Shen, Z.

    2015-11-01

    Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have been conducted on the error transitivity from different input data sets to NPS modeling. In this paper, the effects of four input data sets, including rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. The systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, the rain gauge density resulted in the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount exhibited limited impacts. The mean coefficients of variation for errors associated with single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033, and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.

  19. Uncertainty quantification of overpressure buildup through inverse modeling of compaction processes in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto

    2017-03-01

    This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous literature works, i.e., the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as the vertical distribution of pore pressure, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow for diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.
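
    The surrogate-based workflow can be illustrated with a much simpler stand-in: the sketch below fits a degree-2 polynomial response surface (a crude analogue of a gPCE) to samples of a toy forward model and then uses the cheap surrogate for calibration against a single observation. The forward model, parameter ranges, and observed value are hypothetical.

      import numpy as np

      # Toy forward model: overpressure as a function of two uncertain inputs
      # (purely illustrative, not the basin compaction simulator of the study).
      def forward(theta):
          k_exp, sed_rate = theta
          return 10.0 + 4.0 * k_exp ** 2 + 2.0 * sed_rate + 0.5 * k_exp * sed_rate

      rng = np.random.default_rng(0)
      train = rng.uniform(-1.0, 1.0, size=(200, 2))          # sample the input space
      y = np.array([forward(t) for t in train])

      # Degree-2 polynomial surrogate: basis [1, x1, x2, x1^2, x1*x2, x2^2],
      # coefficients fitted by least squares.
      def basis(t):
          x1, x2 = t[..., 0], t[..., 1]
          return np.stack([np.ones_like(x1), x1, x2, x1**2, x1*x2, x2**2], axis=-1)

      coef, *_ = np.linalg.lstsq(basis(train), y, rcond=None)
      surrogate = lambda t: basis(np.atleast_2d(t)) @ coef

      # The cheap surrogate can now replace the forward model inside an inverse
      # (calibration) loop, e.g. a grid search or MCMC against observed pressure.
      observed = 12.3
      grid = rng.uniform(-1.0, 1.0, size=(5000, 2))
      misfit = (surrogate(grid).ravel() - observed) ** 2
      print("best-fit parameters:", grid[np.argmin(misfit)])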

  20. Intelligent robust tracking control for a class of uncertain strict-feedback nonlinear systems.

    PubMed

    Chang, Yeong-Chan

    2009-02-01

    This paper addresses the problem of designing robust tracking controls for a large class of strict-feedback nonlinear systems involving plant uncertainties and external disturbances. The input and virtual input weighting matrices are perturbed by bounded time-varying uncertainties. An adaptive fuzzy-based (or neural-network-based) dynamic feedback tracking controller is developed such that all the states and signals of the closed-loop system are bounded and the trajectory tracking error is as small as possible. First, adaptive approximators with linearly parameterized models are designed, and a partitioned procedure with respect to the developed adaptive approximators is proposed such that the implementation of the fuzzy (or neural network) basis functions depends only on the state variables and not on the tuning approximation parameters. Furthermore, we extend the design to nonlinearly parameterized adaptive approximators. Consequently, the intelligent robust tracking control schemes developed in this paper possess the properties of computational simplicity and easy implementation. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed control algorithms.

  1. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Florian Wellmann, J.; Thiele, Sam T.; Lindsay, Mark D.; Jessell, Mark W.

    2016-03-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilize the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  2. pynoddy 1.0: an experimental platform for automated 3-D kinematic and potential field modelling

    NASA Astrophysics Data System (ADS)

    Wellmann, J. F.; Thiele, S. T.; Lindsay, M. D.; Jessell, M. W.

    2015-11-01

    We present a novel methodology for performing experiments with subsurface structural models using a set of flexible and extensible Python modules. We utilise the ability of kinematic modelling techniques to describe major deformational, tectonic, and magmatic events at low computational cost to develop experiments testing the interactions between multiple kinematic events, effect of uncertainty regarding event timing, and kinematic properties. These tests are simple to implement and perform, as they are automated within the Python scripting language, allowing the encapsulation of entire kinematic experiments within high-level class definitions and fully reproducible results. In addition, we provide a link to geophysical potential-field simulations to evaluate the effect of parameter uncertainties on maps of gravity and magnetics. We provide relevant fundamental information on kinematic modelling and our implementation, and showcase the application of our novel methods to investigate the interaction of multiple tectonic events on a pre-defined stratigraphy, the effect of changing kinematic parameters on simulated geophysical potential fields, and the distribution of uncertain areas in a full 3-D kinematic model, based on estimated uncertainties in kinematic input parameters. Additional possibilities for linking kinematic modelling to subsequent process simulations are discussed, as well as additional aspects of future research. Our modules are freely available on github, including documentation and tutorial examples, and we encourage the contribution to this project.

  3. Impact of influent data frequency and model structure on the quality of WWTP model calibration and uncertainty.

    PubMed

    Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar

    2012-01-01

    Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.

  4. Towards Characterization, Modeling, and Uncertainty Quantification in Multi-scale Mechanics of Organic-rich Shales

    NASA Astrophysics Data System (ADS)

    Abedi, S.; Mashhadian, M.; Noshadravan, A.

    2015-12-01

    Increasing the efficiency and sustainability of hydrocarbon recovery operations from organic-rich shales requires a fundamental understanding of the chemomechanical properties of these materials. This understanding is manifested in the form of physics-based predictive models capable of capturing the highly heterogeneous and multi-scale structure of organic-rich shale materials. In this work we present a framework of experimental characterization, micromechanical modeling, and uncertainty quantification that spans from the nanoscale to the macroscale. Applying experiments such as coupled grid nano-indentation and energy dispersive X-ray spectroscopy, together with micromechanical modeling that attributes the role of organic maturity to the texture of the material, allows us to identify clay mechanical properties that are consistent among different samples and independent of the maturity of shale formations and total organic content. The results can then be used to inform a physically based multiscale model for organic-rich shales consisting of three levels, spanning from the scale of the elementary building blocks (e.g., clay minerals in clay-dominated formations) to the scale of the macroscopic inorganic/organic, hard/soft inclusion composite. Although this approach is powerful in capturing the effective properties of organic-rich shale in an average sense, it does not account for the uncertainty in compositional and mechanical model parameters. Thus, we take this model one step further by systematically incorporating the main sources of uncertainty in modeling the multiscale behavior of organic-rich shales. In particular, we account for the uncertainty in the main model parameters at different scales, such as porosity, elastic properties, and mineralogy mass percent. To that end, we use the Maximum Entropy Principle and random matrix theory to construct probabilistic descriptions of model inputs based on the available information. Monte Carlo simulation is then carried out to propagate the uncertainty and consequently construct probabilistic descriptions of properties at multiple length scales. The combination of experimental characterization and stochastic multi-scale modeling presented in this work improves the robustness of predictions of essential subsurface parameters at the engineering scale.

  5. A probabilistic approach to emissions from transportation sector in the coming decades

    NASA Astrophysics Data System (ADS)

    Yan, F.; Winijkul, E.; Bond, T. C.; Streets, D. G.

    2010-12-01

    Future emission estimates are necessary for understanding climate change, designing national and international strategies for air quality control, and evaluating mitigation policies. Emission inventories are uncertain and future projections even more so. Most current emission projection models are deterministic; in other words, there is only a single answer for each scenario. As a result, uncertainties have not been included in the estimation of climate forcing or other environmental effects, but it is important to quantify the uncertainty inherent in emission projections. We explore uncertainties of emission projections from the transportation sector in the coming decades by sensitivity analysis and Monte Carlo simulations. These projections are based on a technology-driven model: the Speciated Pollutants Emission Wizard (SPEW)-Trend, which responds to socioeconomic conditions in different economic and mitigation scenarios. The model contains detail about technology stock, including consumption growth rates, retirement rates, timing of emission standards, deterioration rates, and transition rates from normal vehicles to vehicles with extremely high emission factors (termed “superemitters”). However, understanding of these parameters, as well as their relationships with socioeconomic conditions, is uncertain. We project emissions from the transportation sector under four different IPCC scenarios (A1B, A2, B1, and B2). Due to the later implementation of advanced emission standards, Africa has the highest annual growth rate (1.2-3.1%) from 2010 to 2050. Superemitters begin producing more than 50% of global emissions around the year 2020. We estimate uncertainties from the relationships between technological change and socioeconomic conditions and examine their impact on future emissions. Sensitivities to parameters governing retirement rates are highest, causing changes in global emissions from -26% to +55% on average from 2010 to 2050. We perform Monte Carlo simulations to examine how these uncertainties affect total emissions when every input parameter with inherent uncertainty is replaced by a probability distribution of values and all parameters are varied at the same time; the 95% confidence interval of the global emission annual growth rate is -1.9% to +0.2% per year.
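
    A minimal Monte Carlo sketch of the kind of propagation described above is given below; the distributions for retirement and superemitter transition rates, and the stock-turnover arithmetic, are invented placeholders rather than the SPEW-Trend model.

      import numpy as np

      rng = np.random.default_rng(1)
      n_draws = 10_000
      years = np.arange(2010, 2051)

      # Illustrative distributions for two uncertain inputs highlighted in the
      # abstract: the fleet retirement rate and the superemitter transition rate.
      retirement_rate = rng.triangular(0.03, 0.06, 0.10, n_draws)   # per year
      superemitter_rate = rng.uniform(0.005, 0.02, n_draws)         # per year

      base_emission = 100.0   # arbitrary units in the base year
      growth = 0.02           # assumed activity growth per year

      # Very simplified stock-turnover logic: activity grows, old (dirtier)
      # vehicles retire, and part of the fleet drifts into the high-emitting class.
      emissions = np.empty((n_draws, years.size))
      for i, year in enumerate(years):
          age_factor = np.exp(-retirement_rate * i)            # surviving old stock
          dirty_share = 1.0 - np.exp(-superemitter_rate * i)   # superemitter share
          emissions[:, i] = base_emission * (1 + growth) ** i * (
              0.6 * age_factor + 0.4 + 0.5 * dirty_share)

      annual_growth = (emissions[:, -1] / emissions[:, 0]) ** (1 / (years.size - 1)) - 1
      lo, hi = np.percentile(annual_growth, [2.5, 97.5])
      print(f"95% interval of annual growth: {lo:+.2%} to {hi:+.2%}")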

  6. The uncertainty cascade in flood risk assessment under changing climatic conditions - the Biala Tarnowska case study

    NASA Astrophysics Data System (ADS)

    Doroszkiewicz, Joanna; Romanowicz, Renata

    2016-04-01

    Uncertainty in the results of the hydraulic model is not only associated with the limitations of that model and the shortcomings of data. An important factor that has a major impact on the uncertainty of flood risk assessment under changing climatic conditions is the uncertainty of future climate scenarios (IPCC WG I, 2013). Future climate projections provided by global climate models are used to generate the future runoff required as an input to hydraulic models applied in the derivation of flood risk maps. The Biala Tarnowska catchment, situated in southern Poland, is used as a case study. Future discharges at the input to a hydraulic model are obtained using the HBV model and climate projections obtained from the EUROCORDEX project. The study describes a cascade of uncertainty related to different stages of the process of deriving flood risk maps under changing climate conditions. In this context it takes into account the uncertainty of future climate projections, the uncertainty of the flow routing model, the propagation of that uncertainty through the hydraulic model, and finally, the uncertainty related to the derivation of flood risk maps. One of the aims of this study is an assessment of the relative impact of different sources of uncertainty on the uncertainty of flood risk maps. Due to the complexity of the process, an assessment of the total uncertainty of inundation probability maps can be very computationally demanding. As a way forward we present an application of a hydraulic model simulator based on a nonlinear transfer function model for chosen locations along the river reach. The transfer function model parameters are estimated based on simulations of the hydraulic model at each of the model cross-sections. The study shows that the application of the simulator substantially reduces the computational requirements related to the derivation of flood risk maps under future climatic conditions. Acknowledgements: This work was supported by the project CHIHE (Climate Change Impact on Hydrological Extremes), carried out in the Institute of Geophysics Polish Academy of Sciences, funded by Norway Grants (contract No. Pol-Nor/196243/80/2013). The hydro-meteorological observations were provided by the Institute of Meteorology and Water Management (IMGW), Poland.

  7. Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector

    NASA Astrophysics Data System (ADS)

    Lenel, U. R.; Davies, D. G. S.; Moore, M. A.

    An analytical technique (Analysis with Uncertain Quantities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply and constraints). Some examples are given of probabilistic estimates of input quantities.

  8. ecode - Electron Transport Algorithm Testing v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene

    2016-10-05

    ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.

  9. The future viability of algae-derived biodiesel under economic and technical uncertainties.

    PubMed

    Brownbridge, George; Azadi, Pooya; Smallbone, Andrew; Bhave, Amit; Taylor, Benjamin; Kraft, Markus

    2014-01-01

    This study presents a techno-economic assessment of algae-derived biodiesel under economic and technical uncertainties associated with the development of algal biorefineries. A global sensitivity analysis was performed using a High Dimensional Model Representation (HDMR) method. It was found that, considering reasonable ranges over which each parameter can vary, the sensitivity of the biodiesel production cost to the key input parameters decreases in the following order: algae oil content > algae annual productivity per unit area > plant production capacity > carbon price increase rate. It was also found that the Return on Investment (ROI) is highly sensitive to the algae oil content, and to a lesser extent to the algae annual productivity, crude oil price and price increase rate, plant production capacity, and carbon price increase rate. For a large scale plant (100,000 tonnes of biodiesel per year) the production cost of biodiesel is likely to be £0.8-1.6 per kg. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
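
    The telescoping structure of the multilevel estimator can be sketched in a few lines; the stand-in solver below mimics a level-dependent discretization error and is not the elliptic nonlocal solver used in the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def solve(theta, level):
          # Stand-in for solving the equation at discretization level `level`:
          # the answer approaches sin(theta) as the level increases (synthetic).
          h = 2.0 ** (-level)
          return np.sin(theta) + 0.3 * h * np.cos(7.0 * theta)

      def mlmc_estimate(levels=(0, 1, 2, 3), samples=(4000, 1000, 250, 60)):
          # Telescoping MLMC estimator: E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}],
          # with many cheap coarse samples and few expensive fine-level corrections.
          total = 0.0
          for level, n in zip(levels, samples):
              thetas = rng.uniform(0.0, np.pi, n)     # random model inputs
              if level == 0:
                  corrections = solve(thetas, 0)
              else:
                  corrections = solve(thetas, level) - solve(thetas, level - 1)
              total += corrections.mean()
          return total

      print("MLMC estimate of E[Q]:", mlmc_estimate())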

  11. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work needed to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mink, S. E. de; Belczynski, K., E-mail: S.E.deMink@uva.nl, E-mail: kbelczyn@astrouw.edu.pl

    The initial mass function (IMF), binary fraction, and distributions of binary parameters (mass ratios, separations, and eccentricities) are indispensable inputs for simulations of stellar populations. It is often claimed that these are poorly constrained, significantly affecting evolutionary predictions. Recently, dedicated observing campaigns have provided new constraints on the initial conditions for massive stars. Findings include a larger close binary fraction and a stronger preference for very tight systems. We investigate the impact on the predicted merger rates of neutron stars and black holes. Despite the changes with respect to previous assumptions, we only find an increase of less than a factor of 2 (insignificant compared with evolutionary uncertainties of typically a factor of 10–100). We further show that the uncertainties in the new initial binary properties do not significantly affect (within a factor of 2) our predictions of double compact object merger rates. An exception is the uncertainty in the IMF (variations by a factor of 6 up and down). No significant changes in the distributions of final component masses, mass ratios, chirp masses, and delay times are found. We conclude that the predictions are, for practical purposes, robust against uncertainties in the initial conditions concerning binary parameters, with the exception of the IMF. This eliminates an important layer of the many uncertain assumptions affecting the predictions of merger detection rates with the gravitational wave detectors aLIGO/aVirgo.

  13. System/observer/controller identification toolbox

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Horta, Lucas G.; Phan, Minh

    1992-01-01

    System identification is the process of constructing a mathematical model from input and output data for a system under testing, and characterizing the system uncertainties and measurement noises. The mathematical model structure can take various forms depending upon the intended use. The SYSTEM/OBSERVER/CONTROLLER IDENTIFICATION TOOLBOX (SOCIT) is a collection of functions, written in the MATLAB language and expressed in M-files, that implements a variety of modern system identification techniques. For an open-loop system, the central features of the SOCIT are functions for identification of a system model and its corresponding forward and backward observers directly from input and output data. The system and observers are represented by a discrete model. The identified model and observers may be used for controller design of linear systems as well as identification of modal parameters such as damping, frequencies, and mode shapes. For a closed-loop system, an observer and its corresponding controller gain can be identified directly from input and output data.
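
    The core idea of identifying a discrete model directly from input and output data can be illustrated with a much simpler least-squares (ARX) fit, shown below in Python; this is only a conceptual sketch, not the SOCIT/MATLAB algorithms themselves, and the second-order test system is synthetic.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic single-input single-output data from a 2nd-order discrete system.
      n = 500
      u = rng.normal(size=n)                      # input (excitation) signal
      y = np.zeros(n)
      for k in range(2, n):
          y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.01 * rng.normal()

      # ARX identification: regress y[k] on past outputs and inputs and solve
      # the least-squares problem for the model coefficients.
      na, nb = 2, 1
      rows, targets = [], []
      for k in range(max(na, nb), n):
          rows.append(np.concatenate([y[k-na:k][::-1], u[k-nb:k][::-1]]))
          targets.append(y[k])
      theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
      print("estimated [a1, a2, b1]:", theta)     # close to [1.5, -0.7, 0.5]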

  14. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    NASA Astrophysics Data System (ADS)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods, and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that in the case of a lack of knowledge about the problem, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts would otherwise be forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced by using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for quantification of epistemic uncertainty on a large-scale problem, a combined cycle power generation system, was selected. A detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge than deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher-order technology interactions and an improvement in the predicted system performance.
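
    To make the contrast with probabilistic propagation concrete, the sketch below propagates expert-supplied bounds with simple interval arithmetic, one of the epistemic representations mentioned above; the Interval class, technology impacts, and synergy factor are hypothetical.

      # Minimal interval-arithmetic sketch for propagating epistemic uncertainty
      # when experts give only bounds (no probability distributions). Hypothetical
      # example: two technology benefits combine through an uncertain synergy factor.

      class Interval:
          def __init__(self, lo, hi):
              self.lo, self.hi = lo, hi
          def __add__(self, other):
              return Interval(self.lo + other.lo, self.hi + other.hi)
          def __mul__(self, other):
              products = [self.lo * other.lo, self.lo * other.hi,
                          self.hi * other.lo, self.hi * other.hi]
              return Interval(min(products), max(products))
          def __repr__(self):
              return f"[{self.lo:.4f}, {self.hi:.4f}]"

      impact_a = Interval(0.02, 0.05)        # expert bounds on technology A benefit
      impact_b = Interval(0.01, 0.04)        # expert bounds on technology B benefit
      synergy = Interval(0.8, 1.1)           # uncertain interaction factor

      combined = (impact_a + impact_b) * synergy
      print("combined fractional benefit:", combined)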

  15. A Python Interface for the Dakota Iterative Systems Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E.; Syvitski, J. P.

    2016-12-01

    Uncertainty quantification is required to improve the accuracy, reliability, and accountability of Earth science models. Dakota is a software toolkit, developed at Sandia National Laboratories, that provides an interface between models and a library of analysis methods, including support for sensitivity analysis, uncertainty quantification, optimization, and calibration techniques. Dakota is a powerful tool, but its learning curve is steep: the user not only must understand the structure and syntax of the Dakota input file, but also must develop intermediate code, called an analysis driver, that allows Dakota to run a model. The CSDMS Dakota interface (CDI) is a Python package that wraps and extends Dakota's user interface. It simplifies the process of configuring and running a Dakota experiment. A user can program to the CDI, allowing a Dakota experiment to be scripted. The CDI creates Dakota input files and provides a generic analysis driver. Any model written in Python that exposes a Basic Model Interface (BMI), as well as any model componentized in the CSDMS modeling framework, automatically works with the CDI. The CDI has a plugin architecture, so models written in other languages, or those that don't expose a BMI, can be accessed by the CDI by programmatically extending a template; an example is provided in the CDI distribution. Currently, six analysis methods from the much larger Dakota library have been implemented as examples. To demonstrate the CDI, we performed an uncertainty quantification experiment with the HydroTrend hydrological water balance and transport model. In the experiment, we evaluated the response of long-term suspended sediment load at the river mouth (Qs) to uncertainty in two input parameters, annual mean temperature (T) and precipitation (P), over a series of 100-year runs, using the polynomial chaos method. Through Dakota, we calculated moments, local and global (Sobol') sensitivity indices, and probability density and cumulative distribution functions for the response.
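
    A variance-based sensitivity study of this kind can be reproduced in miniature without Dakota; the sketch below uses the SALib package as a stand-in for the CDI/Dakota workflow (assuming its saltelli/sobol interface), applied to a toy Qs(T, P) response with invented parameter ranges.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Stand-in response surface for Qs as a function of mean temperature T and
      # precipitation P (not HydroTrend itself; illustrative only).
      def qs_response(T, P):
          return 50.0 * P ** 1.3 * np.exp(0.08 * T)

      problem = {
          "num_vars": 2,
          "names": ["T", "P"],
          "bounds": [[8.0, 12.0],    # assumed mean annual temperature range (deg C)
                     [1.0, 2.0]],    # assumed mean annual precipitation range (m)
      }

      X = saltelli.sample(problem, 1024)          # Saltelli design for Sobol' indices
      Y = qs_response(X[:, 0], X[:, 1])
      Si = sobol.analyze(problem, Y)
      print("first-order indices:", Si["S1"])
      print("total-order indices:", Si["ST"])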

  16. Uncertainty in the Modeling of Tsunami Sediment Transport

    NASA Astrophysics Data System (ADS)

    Jaffe, B. E.; Sugawara, D.; Goto, K.; Gelfenbaum, G. R.; La Selle, S.

    2016-12-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. A recent study (Jaffe et al., 2016) explores sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami properties, study site characteristics, available input data, sediment grain size, and the model used. Although uncertainty has the potential to be large, case studies for both forward and inverse models have shown that sediment transport modeling provides useful information on tsunami inundation and hydrodynamics that can be used to improve tsunami hazard assessment. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and the development of hybrid modeling approaches to exploit the strengths of forward and inverse models. As uncertainty in tsunami sediment transport modeling is reduced, and with increased ability to quantify uncertainty, the geologic record of tsunamis will become more valuable in the assessment of tsunami hazard. Jaffe, B., Goto, K., Sugawara, D., Gelfenbaum, G., and La Selle, S., "Uncertainty in Tsunami Sediment Transport Modeling", Journal of Disaster Research Vol. 11 No. 4, pp. 647-661, 2016, doi: 10.20965/jdr.2016.p0647 https://www.fujipress.jp/jdr/dr/dsstr001100040647/

  17. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
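
    The use of validation data to constrain an input parameter can be sketched with a basic Metropolis sampler, the simplest member of the Markov Chain Monte Carlo family referred to above; the prior, likelihood, and one-parameter forward model below are illustrative assumptions, not the case-study models.

      import numpy as np

      rng = np.random.default_rng(6)

      # Validation data for a model output (e.g. head at a monitoring well) and a
      # simple forward model depending on one uncertain input (log-conductivity).
      # All numbers are illustrative.
      obs, obs_sigma = 12.0, 0.5
      forward = lambda logk: 10.0 + 1.5 * logk

      def log_posterior(logk):
          log_prior = -0.5 * ((logk - 0.0) / 2.0) ** 2          # N(0, 2) prior
          log_like = -0.5 * ((forward(logk) - obs) / obs_sigma) ** 2
          return log_prior + log_like

      # Metropolis sampler: propose, then accept or reject based on posterior ratio.
      chain = np.empty(20_000)
      current = 0.0
      current_lp = log_posterior(current)
      for i in range(chain.size):
          proposal = current + rng.normal(scale=0.5)
          lp = log_posterior(proposal)
          if np.log(rng.uniform()) < lp - current_lp:
              current, current_lp = proposal, lp
          chain[i] = current

      posterior = chain[5000:]                                   # discard burn-in
      print("posterior mean and std:", posterior.mean(), posterior.std())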

  18. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters. PMID:23626380

  19. Latin hypercube approach to estimate uncertainty in ground water vulnerability

    USGS Publications Warehouse

    Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.

    2007-01-01

    A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
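
    A stripped-down version of this Latin hypercube propagation is sketched below using scipy's qmc module: coefficient (model error) and explanatory-variable (data error) ranges are sampled jointly and pushed through a logistic model to obtain a prediction interval. All ranges and the logistic form are hypothetical.

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(4)

      # Hypothetical 3-variable logistic vulnerability model: an intercept and a
      # slope (model error) plus one explanatory variable (data error).
      n = 2000
      sampler = qmc.LatinHypercube(d=3, seed=rng)
      unit = sampler.random(n)

      # Scale the unit hypercube to the assumed uncertainty ranges.
      lows = [-2.0, 0.5, 10.0]     # intercept, slope, nitrogen input
      highs = [-1.0, 1.5, 30.0]
      samples = qmc.scale(unit, lows, highs)

      b0, b1, x = samples.T
      prob = 1.0 / (1.0 + np.exp(-(b0 + b1 * (x - 20.0) / 10.0)))   # logistic model
      lo, hi = np.percentile(prob, [2.5, 97.5])
      print(f"95% prediction interval for vulnerability: {lo:.2f} to {hi:.2f}")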

  20. DOE-EPSCOR SPONSORED PROJECT FINAL REPORT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jianting

    Concern over the quality of environmental management and restoration has motivated model development for predicting water and solute transport in the vadose zone. Soil hydraulic properties are required inputs to subsurface models of water flow and contaminant transport in the vadose zone. Computer models are now routinely used in research and management to predict the movement of water and solutes into and through the vadose zone of soils. Such models can be used successfully only if reliable estimates of the soil hydraulic parameters are available. The hydraulic parameters considered in this project consist of the saturated hydraulic conductivity and four parameters of the water retention curves. Quantifying hydraulic parameters for heterogeneous soils is both difficult and time consuming. The overall objective of this project was to better quantify soil hydraulic parameters, which are critical in predicting water flow and contaminant transport in the vadose zone, through a comprehensive and quantitative study to predict heterogeneous soil hydraulic properties and the associated uncertainties. Systematic and quantitative consideration of the parametric heterogeneity and uncertainty can properly address and further reduce predictive uncertainty for contamination characterization and environmental restoration at DOE-managed sites. We conducted a comprehensive study to assess soil hydraulic parameter heterogeneity and uncertainty. We have addressed a number of important issues related to soil hydraulic property characterization. The main focus centered on new methods to characterize the anisotropy of unsaturated hydraulic properties typical of layered soil formations, an uncertainty updating method, and artificial neural network based pedo-transfer functions to predict hydraulic parameters from easily available data. The work also involved upscaling of hydraulic properties applicable to large scale flow and contaminant transport modeling in the vadose zone and geostatistical characterization of hydraulic parameter heterogeneity. The project also examined the validity of some simple averaging schemes for unsaturated hydraulic properties widely used in previous studies. A new suite of pedo-transfer functions was developed to improve the predictability of hydraulic parameters. We also explored the concept of tension-dependent hydraulic conductivity anisotropy of unsaturated layered soils. This project strengthens collaboration between researchers at the Desert Research Institute in the EPSCoR State of Nevada and their colleagues at the Pacific Northwest National Laboratory. The results of numerical simulations of a field injection experiment at the Hanford site in this project could be used to provide insights for the DOE mission of appropriate contamination characterization and environmental remediation.

  1. Emulation and Sensitivity Analysis of the Community Multiscale Air Quality Model for a UK Ozone Pollution Episode.

    PubMed

    Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D

    2017-06-06

    Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576-point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.
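
    The emulation step itself can be illustrated compactly: the sketch below trains a Gaussian process on a small design of runs of a stand-in "expensive" model and then evaluates the cheap emulator (with its predictive uncertainty) at new inputs. It uses scikit-learn rather than the toolchain of the study, and the response function is invented.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(5)

      # Stand-in for an expensive CMAQ run: peak ozone as a smooth function of two
      # scaled inputs (an emission scaling factor and a photolysis-rate factor).
      def expensive_model(x):
          return 60.0 + 25.0 * x[:, 0] - 10.0 * x[:, 1] + 8.0 * x[:, 0] * x[:, 1]

      X_train = rng.uniform(-1.0, 1.0, size=(60, 2))   # design points (e.g. Latin hypercube)
      y_train = expensive_model(X_train)

      kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

      # The cheap emulator can now be evaluated thousands of times for variance-based
      # sensitivity analysis; the GP also returns a predictive standard deviation
      # that quantifies emulator uncertainty.
      X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
      mean, std = gp.predict(X_new, return_std=True)
      print(np.c_[mean, std])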

  2. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.

  3. Modeling the atmospheric chemistry of TICs

    NASA Astrophysics Data System (ADS)

    Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John

    2009-05-01

    An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
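
    The reduced-form chemistry can be pictured as a first-order decay with an effective rate constant; the sketch below applies an assumed keff(temperature, solar flux, cloud cover) function to a TIC concentration, with made-up coefficients that only illustrate the functional form described above.

      import numpy as np

      # Illustrative effective degradation rate: keff (1/s) as an assumed empirical
      # function of temperature, solar flux, and cloud cover, applied as simple
      # first-order decay of a TIC concentration along a plume.
      def keff(temp_K, solar_flux, cloud_fraction):
          photolysis = 1e-4 * solar_flux * (1.0 - 0.7 * cloud_fraction)
          thermal = 5e-5 * np.exp(-2000.0 * (1.0 / temp_K - 1.0 / 298.0))
          return photolysis + thermal

      c0 = 1.0                                 # initial relative concentration
      t = np.linspace(0.0, 3600.0, 7)          # one hour of transport, in seconds
      c = c0 * np.exp(-keff(295.0, 0.8, 0.3) * t)
      print(np.round(c, 3))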

  4. Bridging the gap between theoretical ecology and real ecosystems: modeling invertebrate community composition in streams.

    PubMed

    Schuwirth, Nele; Reichert, Peter

    2013-02-01

    For the first time, we combine concepts of theoretical food web modeling, the metabolic theory of ecology, and ecological stoichiometry with the use of functional trait databases to predict the coexistence of invertebrate taxa in streams. We developed a mechanistic model that describes growth, death, and respiration of different taxa dependent on various environmental influence factors to estimate survival or extinction. Parameter and input uncertainty is propagated to model results. Such a model is needed to test our current quantitative understanding of ecosystem structure and function and to predict effects of anthropogenic impacts and restoration efforts. The model was tested using macroinvertebrate monitoring data from a catchment of the Swiss Plateau. Even without fitting model parameters, the model is able to represent key patterns of the coexistence structure of invertebrates at sites varying in external conditions (litter input, shading, water quality). This confirms the suitability of the model concept. More comprehensive testing and resulting model adaptations will further increase the predictive accuracy of the model.

  5. Probabilistic accident consequence uncertainty analysis -- Uncertainty assessment for deposited material and external doses. Volume 2: Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goossens, L.H.J.; Kraan, B.C.P.; Cooke, R.M.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA deposited material and external dose models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the panel on deposited material and external doses, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  6. Uncertainties of isoprene emissions in the MEGAN model estimated for a coniferous and broad-leaved mixed forest in Southern China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Situ, S.; Wang, Xuemei; Guenther, Alex B.

    2014-12-01

    Using locally observed emission factors, meteorological data, vegetation information, and dynamic MODIS LAI, MEGANv2.1 was constrained to predict the isoprene emission from Dinghushan forest in the Pearl River Delta region during a field campaign in November 2008, and the uncertainties in isoprene emission estimates were quantified by the Monte Carlo approach. The results indicate that MEGAN can predict the isoprene emission reasonably well during the campaign, and the mean value of isoprene emission is 2.35 mg m-2 h-1 in daytime. There are high uncertainties associated with the MEGAN inputs and calculated parameters, and the relative error can be as high as -89 to 111% for a 95% confidence interval. The emission factor of broadleaf trees and the activity factor accounting for light and temperature dependence are the most important contributors to the uncertainties in isoprene emission estimated for the Dinghushan forest during the campaign. The results also emphasize the importance of accurately observed PAR and temperature to reduce the uncertainties in isoprene emission estimated by the model, because the MEGAN model activity factor accounting for light and temperature dependence is highly sensitive to PAR and temperature.

  7. Sensitivity and uncertainty analysis for Abreu & Johnson numerical vapor intrusion model.

    PubMed

    Ma, Jie; Yan, Guangxu; Li, Haiyan; Guo, Shaohui

    2016-03-05

    This study conducted one-at-a-time (OAT) sensitivity and uncertainty analysis for a numerical vapor intrusion model for nine input parameters, including soil porosity, soil moisture, soil air permeability, aerobic biodegradation rate, building depressurization, crack width, floor thickness, building volume, and indoor air exchange rate. Simulations were performed for three soil types (clay, silt, and sand), two source depths (3 and 8 m), and two source concentrations (1 and 400 g/m3). Model sensitivity and uncertainty for shallow and high-concentration vapor sources (3 m and 400 g/m3) are much smaller than for deep and low-concentration sources (8 m and 1 g/m3). For high-concentration sources, soil air permeability, indoor air exchange rate, and building depressurization (for highly permeable soils such as sand) are key contributors to model output uncertainty. For low-concentration sources, soil porosity, soil moisture, aerobic biodegradation rate, and soil gas permeability are key contributors to model output uncertainty. Another important finding is that impacts of aerobic biodegradation on vapor intrusion potential of petroleum hydrocarbons are negligible when vapor source concentration is high, because of insufficient oxygen supply that limits aerobic biodegradation activities. Copyright © 2015 Elsevier B.V. All rights reserved.
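
    A minimal one-at-a-time (OAT) perturbation loop of the kind described above might look like the sketch below; the attenuation function, parameter names, and baseline values are placeholders, not the Abreu & Johnson model.

```python
# Minimal one-at-a-time (OAT) sensitivity sketch. The model below is a
# placeholder attenuation function, not the Abreu & Johnson solver.
import numpy as np

def indoor_air_concentration(p):
    # Toy attenuation model: higher air exchange and thicker floors dilute,
    # higher permeability and depressurization enhance vapor entry.
    entry = p["soil_air_permeability"] * p["building_depressurization"] / p["floor_thickness"]
    dilution = p["air_exchange_rate"] * p["building_volume"]
    return p["source_concentration"] * entry / (entry + dilution)

baseline = {
    "soil_air_permeability": 1e-11,    # m2
    "building_depressurization": 5.0,  # Pa
    "floor_thickness": 0.15,           # m
    "air_exchange_rate": 0.5,          # 1/h
    "building_volume": 300.0,          # m3
    "source_concentration": 400.0,     # g/m3
}

y0 = indoor_air_concentration(baseline)
for name in baseline:
    if name == "source_concentration":
        continue
    for factor in (0.5, 2.0):          # perturb one parameter at a time
        p = dict(baseline)
        p[name] *= factor
        y = indoor_air_concentration(p)
        print(f"{name:28s} x{factor:<3} -> relative change {100*(y - y0)/y0:+6.1f}%")
```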

  8. Probabilistic accident consequence uncertainty analysis -- Late health effects uncertainty assessment. Volume 2: Appendices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Little, M.P.; Muirhead, C.R.; Goossens, L.H.J.

    1997-12-01

    The development of two new probabilistic accident consequence codes, MACCS and COSYMA, was completed in 1990. These codes estimate the consequences of accidental releases of radiological material from hypothesized accidents at nuclear installations. In 1991, the US Nuclear Regulatory Commission and the Commission of the European Communities began cosponsoring a joint uncertainty analysis of the two codes. The ultimate objective of this joint effort was to systematically develop credible and traceable uncertainty distributions for the respective code input variables. A formal expert judgment elicitation and evaluation process was identified as the best technology available for developing a library of uncertainty distributions for these consequence parameters. This report focuses on the results of the study to develop distributions for variables related to the MACCS and COSYMA late health effects models. This volume contains appendices that include (1) a summary of the MACCS and COSYMA consequence codes, (2) the elicitation questionnaires and case structures, (3) the rationales and results for the expert panel on late health effects, (4) short biographies of the experts, and (5) the aggregated results of their responses.

  9. On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza

    2015-12-01

    In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
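
    The study's sparse polynomial chaos surrogate is not reproduced here, but the first-order Sobol' indices it targets can be illustrated with a plain Monte Carlo pick-and-freeze estimator on a toy function, as in the hedged sketch below.

```python
# Hedged sketch of first-order Sobol' indices by plain Monte Carlo
# (Saltelli-style pick-and-freeze), applied to a toy function rather than
# the battery model; a sparse polynomial chaos surrogate would replace the
# direct model evaluations in the paper's approach.
import numpy as np

def model(x):
    # Toy nonlinear model with interacting inputs
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

rng = np.random.default_rng(1)
N, d = 20000, 3
A = rng.uniform(0.0, 1.0, size=(N, d))
B = rng.uniform(0.0, 1.0, size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # swap only column i from B
    fABi = model(ABi)
    Si = np.mean(fB * (fABi - fA)) / var   # Saltelli (2010) estimator
    print(f"S_{i+1} ~ {Si:.3f}")
```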

  10. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to be accurately represented in the model are cataloged. CD Bias values are based on state of the art mask manufacturing data and other variables changes are speculated, highlighting the need for improved metrology and awareness.

  11. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes, which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model that emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
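
    As a hedged sketch of the data-driven category only, the example below fits a quadratic response surface (in log space) to a handful of runs of a stand-in "expensive" model and reuses it for cheap predictions; the model, input ranges, and basis are assumptions for illustration.

```python
# Minimal data-driven surrogate sketch: fit a quadratic response surface
# (in log space) to a few runs of a stand-in "expensive" model and reuse
# it for cheap predictions. Purely illustrative.
import numpy as np

def expensive_model(k, r):
    # Stand-in for a long-running groundwater model (head-change proxy)
    return r / (4.0 * np.pi * k) * np.log1p(1.0 / (k + 0.01))

rng = np.random.default_rng(3)
n_train = 40
k = 10.0 ** rng.uniform(-5.0, -3.0, n_train)   # hydraulic conductivity (m/s)
r = 10.0 ** rng.uniform(-4.0, -3.0, n_train)   # recharge (m/d)
y = expensive_model(k, r)

def basis(k, r):
    # Quadratic polynomial basis in the log-transformed inputs
    a, b = np.log(k), np.log(r)
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

coef, *_ = np.linalg.lstsq(basis(k, r), np.log(y), rcond=None)

def surrogate(k, r):
    return np.exp(basis(np.atleast_1d(k), np.atleast_1d(r)) @ coef)

k_new, r_new = 5e-4, 5e-4
print("surrogate:", surrogate(k_new, r_new)[0], " truth:", expensive_model(k_new, r_new))
```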

  12. The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study

    NASA Astrophysics Data System (ADS)

    Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.

    2017-01-01

    Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides that deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. With the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method estimated 95% confidence limits of -34% and +94%, or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
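
    The comparison described above can be mimicked with the hedged sketch below: first-order analytical propagation for the product E = N_input * EF with an assumed correlation, versus Monte Carlo sampling with a lognormal EF. All numerical values are placeholders, not the New Zealand inventory data.

```python
# Illustrative sketch of the two approaches for E = N_input * EF:
# first-order analytical propagation vs. Monte Carlo with a lognormal EF.
# All numbers below are placeholders, not the NZ inventory values.
import numpy as np

N_mean, N_cv = 400.0, 0.10      # N input and its relative standard deviation
EF_mean, EF_cv = 0.01, 0.25     # emission factor and its relative sd
rho = 0.4                        # assumed correlation between N input and EF

# Analytical (first-order) relative variance of a product of correlated terms
rel_var = N_cv**2 + EF_cv**2 + 2.0 * rho * N_cv * EF_cv
print(f"analytical 95% limit ~ +/-{1.96 * np.sqrt(rel_var) * 100:.0f}%")

# Monte Carlo with a Gaussian copula: normal N input, lognormal EF
rng = np.random.default_rng(7)
n = 200_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
N = N_mean * (1.0 + N_cv * z[:, 0])
sigma_ln = np.sqrt(np.log(1.0 + EF_cv**2))       # lognormal shape parameter
mu_ln = np.log(EF_mean) - 0.5 * sigma_ln**2
EF = np.exp(mu_ln + sigma_ln * z[:, 1])
E = N * EF
lo, hi = np.percentile(E, [2.5, 97.5])
print(f"Monte Carlo 95% limits ~ {100*(lo/E.mean()-1):.0f}% / +{100*(hi/E.mean()-1):.0f}%")
```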

  13. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
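
    Latin Hypercube Monte Carlo propagation of the kind mentioned above can be sketched as follows; this is illustrative only (iTOUGH2 uses its own internal routines) and assumes scipy >= 1.7 for the qmc module.

```python
# Minimal Latin Hypercube Monte Carlo sketch (illustrative; iTOUGH2 uses its
# own internal routines). Requires scipy >= 1.7 for scipy.stats.qmc.
import numpy as np
from scipy.stats import qmc

def forward_model(perm, poro):
    # Stand-in for a TOUGH2 forward run: a simple breakthrough-time proxy
    return poro / perm * 1e-12

sampler = qmc.LatinHypercube(d=2, seed=11)
u = sampler.random(n=256)                       # stratified samples on [0, 1)^2
# Scale to parameter ranges: log10(permeability [m2]) and porosity [-]
x = qmc.scale(u, l_bounds=[-15.0, 0.05], u_bounds=[-12.0, 0.35])
perm = 10.0 ** x[:, 0]
poro = x[:, 1]

y = forward_model(perm, poro)
print(f"median = {np.median(y):.3g}, 95% range = "
      f"[{np.percentile(y, 2.5):.3g}, {np.percentile(y, 97.5):.3g}]")
```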

  14. Reduced uncertainty of regional scale CLM predictions of net carbon fluxes and leaf area indices with estimated plant-specific parameters

    NASA Astrophysics Data System (ADS)

    Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry

    2016-04-01

    Reliable estimates of carbon fluxes and states at regional scales are required to reduce uncertainties in regional carbon balance estimates and to support decision making in environmental politics. In this work the Community Land Model version 4.5 (CLM4.5-BGC) was applied at a high spatial resolution (1 km2) for the Rur catchment in western Germany. In order to improve the model-data consistency of net ecosystem exchange (NEE) and leaf area index (LAI) for this study area, five plant functional type (PFT)-specific CLM4.5-BGC parameters were estimated with time series of half-hourly NEE data for one year in 2011/2012, using the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov Chain Monte Carlo (MCMC) approach. The parameters were estimated separately for four different plant functional types (needleleaf evergreen temperate tree, broadleaf deciduous temperate tree, C3-grass and C3-crop) at four different sites. The four sites are located inside or close to the Rur catchment. We evaluated modeled NEE for one year in 2012/2013 with NEE measured at seven eddy covariance sites in the catchment, including the four parameter estimation sites. Modeled LAI was evaluated by means of LAI derived from remotely sensed RapidEye images acquired on about 18 days in 2011/2012. Performance indices were based on a comparison between measurements and (i) a reference run with CLM default parameters, and (ii) a 60-instance CLM ensemble with parameters sampled from the DREAM posterior probability density functions (pdfs). The difference between the observed and simulated NEE sums was reduced by 23% when estimated parameters were used as input instead of the default parameters. The mean absolute difference between modeled and measured LAI was reduced by 59% on average. Simulated LAI was not only improved in terms of the absolute value but in some cases also in terms of the timing (beginning of vegetation onset), which was directly related to a substantial improvement of the NEE estimates in spring. In order to obtain a more comprehensive estimate of the model uncertainty, a second CLM ensemble was set up, where initial conditions and atmospheric forcings were perturbed in addition to the parameter estimates. This resulted in very high standard deviations (STD) of the modeled annual NEE sums for C3-grass and C3-crop PFTs, ranging between 24.1 and 225.9 gC m-2 y-1, compared to STD = 0.1 - 3.4 gC m-2 y-1 (effect of parameter uncertainty only, without additional perturbation of initial states and atmospheric forcings). The higher spread of modeled NEE for the C3-crop and C3-grass indicated that the model uncertainty was notably higher for those PFTs compared to the forest PFTs. Our findings highlight the potential of parameter and uncertainty estimation to support the understanding and further development of land surface models such as CLM.
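
    DREAM itself runs multiple adaptive chains; as a hedged stand-in for the Bayesian calibration step, the sketch below applies a plain random-walk Metropolis sampler to one parameter of a toy NEE-like light-response model with synthetic observations. The model form, parameter, and noise level are assumptions.

```python
# Hedged stand-in for the DREAM step: a plain random-walk Metropolis sampler
# calibrating one parameter of a toy NEE-like model against synthetic data.
# DREAM runs multiple adaptive chains; this only shows the MCMC idea.
import numpy as np

rng = np.random.default_rng(5)

def nee_model(light, q_eff):
    # Toy light-response curve standing in for simulated half-hourly NEE
    return -q_eff * light / (1.0 + 0.002 * light)

# Synthetic half-hourly "observations"
light = rng.uniform(0.0, 1500.0, 200)
obs = nee_model(light, q_eff=0.05) + rng.normal(0.0, 0.5, light.size)
sigma = 0.5

def log_post(q_eff):
    if not (0.0 < q_eff < 1.0):            # uniform prior on (0, 1)
        return -np.inf
    resid = obs - nee_model(light, q_eff)
    return -0.5 * np.sum((resid / sigma) ** 2)

chain, q = [], 0.2
lp = log_post(q)
for _ in range(20000):
    q_new = q + rng.normal(0.0, 0.01)      # random-walk proposal
    lp_new = log_post(q_new)
    if np.log(rng.uniform()) < lp_new - lp:
        q, lp = q_new, lp_new
    chain.append(q)

post = np.array(chain[5000:])              # discard burn-in
print(f"posterior mean = {post.mean():.4f}, 95% CI = "
      f"[{np.percentile(post, 2.5):.4f}, {np.percentile(post, 97.5):.4f}]")
```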

  15. Uncertainty analysis of least-cost modeling for designing wildlife linkages.

    PubMed

    Beier, Paul; Majka, Daniel R; Newell, Shawn L

    2009-12-01

    Least-cost models for focal species are widely used to design wildlife corridors. To evaluate the least-cost modeling approach used to develop 15 linkage designs in southern California, USA, we assessed robustness of the largest and least constrained linkage. Species experts parameterized models for eight species with weights for four habitat factors (land cover, topographic position, elevation, road density) and resistance values for each class within a factor (e.g., each class of land cover). Each model produced a proposed corridor for that species. We examined the extent to which uncertainty in factor weights and class resistance values affected two key conservation-relevant outputs, namely, the location and modeled resistance to movement of each proposed corridor. To do so, we compared the proposed corridor to 13 alternative corridors created with parameter sets that spanned the plausible ranges of biological uncertainty in these parameters. Models for five species were highly robust (mean overlap 88%, little or no increase in resistance). Although the proposed corridors for the other three focal species overlapped as little as 0% (mean 58%) of the alternative corridors, resistance in the proposed corridors for these three species was rarely higher than resistance in the alternative corridors (mean difference was 0.025 on a scale of 1-10; worst difference was 0.39). As long as the model had the correct rank order of resistance values and factor weights, our results suggest that the predicted corridor is robust to uncertainty. The three carnivore focal species, alone or in combination, were not effective umbrellas for the other focal species. The carnivore corridors failed to overlap the predicted corridors of most other focal species and provided relatively high resistance for the other focal species (mean increase of 2.7 resistance units). Least-cost modelers should conduct uncertainty analysis so that decision-makers can appreciate the potential impact of model uncertainty on conservation decisions. Our approach to uncertainty analysis (which can be called a worst-case scenario approach) is appropriate for complex models in which distribution of the input parameters cannot be specified.

  16. Influence of model reduction on uncertainty of flood inundation predictions

    NASA Astrophysics Data System (ADS)

    Romanowicz, R. J.; Kiczko, A.; Osuch, M.

    2012-04-01

    Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedence, e.g. a 100 or 500 year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of a flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions and the estimates of model parameters which are usually identified using the inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for the urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters, with all the bridges and dikes taken into account, with a reduced number and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted for the reduction in model structure. However, the price we pay is the model robustness. Apart from a relatively simple question regarding reducing model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that the uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of that uncertainty to be assessed. This work was supported by National Science Centre of Poland (grant 2011/01/B/ST10/06866).

  17. Development and Implementation of a Formal Framework for Bottom-up Uncertainty Analysis of Input Emissions: Case Study of Residential Wood Combustion

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Mashayekhi, R.; Saeednooran, S.; Hakami, A.; Ménard, R.; Moran, M. D.; Zhang, J.

    2016-12-01

    We have developed a formal framework for documentation, quantification, and propagation of uncertainties in upstream emissions inventory data at various stages leading to the generation of model-ready gridded emissions through emissions processing software such as the EPA's SMOKE (Sparse Matrix Operator Kernel Emissions) system. To illustrate this framework we present a proof-of-concept case study of a bottom-up quantitative assessment of uncertainties in emissions from residential wood combustion (RWC) in the U.S. and Canada. Uncertainties associated with key inventory parameters are characterized based on existing information sources, including the American Housing Survey (AHS) from the U.S. Census Bureau, Timber Products Output (TPO) surveys from the U.S. Forest Service, TNS Canadian Facts surveys, and the AP-42 emission factor document from the U.S. EPA. The propagation of uncertainties is based on Monte Carlo simulation code external to SMOKE. Latin Hypercube Sampling (LHS) is implemented to generate a set of random realizations of each RWC inventory parameter, for which the uncertainties are assumed to be normally distributed. Random realizations are also obtained for each RWC temporal and chemical speciation profile and spatial surrogate field external to SMOKE using the LHS approach. SMOKE outputs for primary emissions (e.g., CO, VOC) using both RWC emission inventory realizations and perturbed temporal and chemical profiles and spatial surrogates show relative uncertainties of about 30-50% across the U.S. and about 70-100% across Canada. Positive skewness values (up to 2.7) and variable kurtosis values (up to 4.8) were also found. Spatial allocation contributes significantly to the overall uncertainty, particularly in Canada. By applying this framework we are able to produce random realizations of model-ready gridded emissions that along with available meteorological ensembles can be used to propagate uncertainties through chemical transport models. The approach described here provides an effective means for formal quantification of uncertainties in estimated emissions from various source sectors and for continuous documentation, assessment, and reduction of emission uncertainties.

  18. Microbial models with data-driven parameters predict stronger soil carbon responses to climate change.

    PubMed

    Hararuk, Oleksandra; Smith, Matthew J; Luo, Yiqi

    2015-06-01

    Long-term carbon (C) cycle feedbacks to climate depend on the future dynamics of soil organic carbon (SOC). Current models show low predictive accuracy at simulating contemporary SOC pools, which can be improved through parameter estimation. However, major uncertainty remains in global soil responses to climate change, particularly uncertainty in how the activity of soil microbial communities will respond. To date, the role of microbes in SOC dynamics has been implicitly described by decay rate constants in most conventional global carbon cycle models. Explicitly including microbial biomass dynamics into C cycle model formulations has shown potential to improve model predictive performance when assessed against global SOC databases. This study aimed to constrain the parameters of two soil microbial models with data, evaluate the improvements in performance of those calibrated models in predicting contemporary carbon stocks, and compare the SOC responses to climate change and their uncertainties between microbial and conventional models. Microbial models with calibrated parameters explained 51% of variability in the observed total SOC, whereas a calibrated conventional model explained 41%. The microbial models, when forced with climate and soil carbon input predictions from the 5th Coupled Model Intercomparison Project (CMIP5), produced stronger soil C responses to 95 years of climate change than any of the 11 CMIP5 models. The calibrated microbial models predicted between 8% (2-pool model) and 11% (4-pool model) soil C losses compared with CMIP5 model projections, which ranged from a 7% loss to a 22.6% gain. Lastly, we observed unrealistic oscillatory SOC dynamics in the 2-pool microbial model. The 4-pool model also produced oscillations, but they were less prominent and could be avoided, depending on the parameter values. © 2014 John Wiley & Sons Ltd.

  19. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
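
    A compact sketch of the Morris elementary-effects screening used above is given below, with a toy response standing in for the RFA model and illustrative input names (perfusion, electrical conductivity, cell-death rate) scaled to the unit cube; grid levels, ranges, and the response function are assumptions.

```python
# Compact sketch of the Morris elementary-effects screening method on a toy
# function standing in for the RFA model. Ranges and model are illustrative.
import numpy as np

def ablation_radius(x):
    perfusion, sigma_el, k_death = x       # inputs scaled to [0, 1]
    # Toy response: conductivity grows the lesion, perfusion shrinks it,
    # and the cell-death parameter interacts with perfusion.
    return 10.0 * sigma_el - 6.0 * perfusion + 4.0 * k_death * (1.0 - perfusion)

rng = np.random.default_rng(2)
k, p, r = 3, 4, 50                          # factors, grid levels, trajectories
delta = p / (2.0 * (p - 1.0))
ee = [[] for _ in range(k)]                 # elementary effects per factor

for _ in range(r):
    x = rng.integers(0, p - 1, size=k) / (p - 1.0)   # random grid start point
    for i in rng.permutation(k):                     # move one factor at a time
        step = delta if x[i] + delta <= 1.0 else -delta
        x_new = x.copy()
        x_new[i] += step
        ee[i].append((ablation_radius(x_new) - ablation_radius(x)) / step)
        x = x_new

names = ["perfusion", "electrical_conductivity", "cell_death_rate"]
for name, e in zip(names, ee):
    e = np.array(e)
    print(f"{name:24s} mu* = {np.abs(e).mean():5.2f}  sigma = {e.std():5.2f}")
```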

  20. Assessing risk of baleen whale hearing loss from seismic surveys: The effect of uncertainty and individual variation.

    PubMed

    Gedamke, Jason; Gales, Nick; Frydman, Sascha

    2011-01-01

    The potential for seismic airgun "shots" to cause acoustic trauma in marine mammals is poorly understood. There are just two empirical measurements of temporary threshold shift (TTS) onset levels from airgun-like sounds in odontocetes. Considering these limited data, a model was developed examining the impact of individual variability and uncertainty on risk assessment of baleen whale TTS from seismic surveys. In each of 100 simulations: 10000 "whales" are assigned TTS onset levels accounting for: inter-individual variation; uncertainty over the population's mean; and uncertainty over weighting of odontocete data to obtain baleen whale onset levels. Randomly distributed whales are exposed to one seismic survey passage with cumulative exposure level calculated. In the base scenario, 29% of whales (5th/95th percentiles of 10%/62%) approached to 1-1.2 km range were exposed to levels sufficient for TTS onset. By comparison, no whales are at risk outside 0.6 km when uncertainty and variability are not considered. Potentially "exposure altering" parameters (movement, avoidance, surfacing, and effective quiet) were also simulated. Until more research refines model inputs, the results suggest a reasonable likelihood that whales at a kilometer or more from seismic surveys could potentially be susceptible to TTS and demonstrate that the large impact uncertainty and variability can have on risk assessment.

  1. Sketching Uncertainty into Simulations.

    PubMed

    Ribicic, H; Waser, J; Gurbat, R; Sadransky, B; Groller, M E

    2012-12-01

    In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively set up complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.

  2. Uncertainty analysis of a groundwater flow model in east-central Florida

    USGS Publications Warehouse

    Sepúlveda, Nicasio; Doherty, John E.

    2014-01-01

    A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context.
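
    The Null Space Monte Carlo idea can be illustrated with the hedged sketch below: random parameter perturbations are projected onto the null space of a (synthetic) sensitivity matrix so that perturbed parameter sets leave the calibrated outputs essentially unchanged. The Jacobian here is random, not from a MODFLOW model.

```python
# Hedged illustration of the Null Space Monte Carlo idea: project random
# parameter perturbations onto the null space of a toy sensitivity matrix
# so that the perturbed parameter sets leave calibrated outputs nearly
# unchanged. The Jacobian here is synthetic, not from a groundwater model.
import numpy as np

rng = np.random.default_rng(8)
n_obs, n_par = 6, 20                     # fewer observations than parameters
J = rng.normal(size=(n_obs, n_par))      # toy Jacobian d(observations)/d(parameters)
p_cal = rng.normal(size=n_par)           # "calibrated" parameter field

# SVD: rows of Vt beyond the rank of J span the calibration null space
U, s, Vt = np.linalg.svd(J, full_matrices=True)
rank = np.sum(s > 1e-10)
V_null = Vt[rank:].T                     # shape (n_par, n_par - rank)

realizations = []
for _ in range(100):
    dp = rng.normal(size=n_par)                        # candidate perturbation
    dp_null = V_null @ (V_null.T @ dp)                 # keep only the null-space part
    realizations.append(p_cal + dp_null)

# Perturbations barely change the simulated equivalents of the observations
mis = [np.linalg.norm(J @ (p - p_cal)) for p in realizations]
print(f"max change in simulated observations: {max(mis):.2e}")
```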

  3. Uncertainty analysis of a groundwater flow model in East-central Florida.

    PubMed

    Sepúlveda, Nicasio; Doherty, John

    2015-01-01

    A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context. © 2014, National Ground Water Association.

  4. On the Application of Science Systems Engineering and Uncertainty Quantification for Ice Sheet Science and Sea Level Projections

    NASA Astrophysics Data System (ADS)

    Schlegel, Nicole-Jeanne; Boening, Carmen; Larour, Eric; Limonadi, Daniel; Schodlok, Michael; Seroussi, Helene; Watkins, Michael

    2017-04-01

    Research and development activities at the Jet Propulsion Laboratory (JPL) currently support the creation of a framework to formally evaluate the observational needs within earth system science. One of the pilot projects of this effort aims to quantify uncertainties in global mean sea level rise projections, due to contributions from the continental ice sheets. Here, we take advantage of established uncertainty quantification tools embedded within the JPL-University of California at Irvine Ice Sheet System Model (ISSM). We conduct sensitivity and Monte-Carlo style sampling experiments on forward simulations of the Greenland and Antarctic ice sheets. By varying internal parameters and boundary conditions of the system over both extreme and credible worst-case ranges, we assess the impact of the different parameter ranges on century-scale sea level rise projections. The results inform efforts to a) isolate the processes and inputs that are most responsible for determining ice sheet contribution to sea level; b) redefine uncertainty brackets for century-scale projections; and c) provide a prioritized list of measurements, along with quantitative information on spatial and temporal resolution, required for reducing uncertainty in future sea level rise projections. Results indicate that ice sheet mass loss is dependent on the spatial resolution of key boundary conditions - such as bedrock topography and melt rates at the ice-ocean interface. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  5. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  6. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  7. Evaluation strategies and uncertainty calculation of isotope amount ratios measured by MC ICP-MS on the example of Sr.

    PubMed

    Horsky, Monika; Irrgeher, Johanna; Prohaska, Thomas

    2016-01-01

    This paper critically reviews the state-of-the-art of isotope amount ratio measurements by solution-based multi-collector inductively coupled plasma mass spectrometry (MC ICP-MS) and presents guidelines for corresponding data reduction strategies and uncertainty assessments based on the example of n(87Sr)/n(86Sr) isotope ratios. This ratio shows variation attributable to natural radiogenic processes and mass-dependent fractionation. The applied calibration strategies can display these differences. In addition, a proper statement of uncertainty of measurement, including all relevant influence quantities, is a metrological prerequisite. A detailed instructive procedure for the calculation of combined uncertainties is presented for Sr isotope amount ratios using three different strategies of correction for instrumental isotopic fractionation (IIF): traditional internal correction, standard-sample bracketing, and a combination of both, using Zr as internal standard. Uncertainties are quantified by means of a Kragten spreadsheet approach, including the consideration of correlations between individual input parameters to the model equation. The resulting uncertainties are compared with uncertainties obtained from the partial derivatives approach and Monte Carlo propagation of distributions. We obtain relative expanded uncertainties (U_rel; k = 2) of n(87Sr)/n(86Sr) of < 0.03% when normalization values are not propagated. A comprehensive propagation, including certified values and the internal normalization ratio in nature, increases relative expanded uncertainties by about a factor of two, and the correction for IIF becomes the major contributor.
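
    A hedged numerical sketch of the Kragten approach follows: each input is perturbed by its standard uncertainty, the result is recomputed, and the squared differences are summed. The model equation is a simplified bracketing-style correction with invented values, not the full Sr data-reduction chain.

```python
# Hedged numerical sketch of the Kragten spreadsheet approach for combined
# uncertainty: perturb each input by its standard uncertainty, recompute the
# result, and sum the squared differences. The model equation is a simplified
# standard-sample bracketing correction with illustrative values only.
import numpy as np

def corrected_ratio(r_sample, r_std_meas, r_std_cert):
    # Measured sample ratio corrected for instrumental isotopic fractionation
    return r_sample * (r_std_cert / r_std_meas)

# Illustrative values and standard uncertainties (not real reference data)
inputs = {
    "r_sample":   (0.71050, 1.5e-5),
    "r_std_meas": (0.71000, 1.2e-5),
    "r_std_cert": (0.71025, 1.0e-5),
}

x0 = {k: v[0] for k, v in inputs.items()}
y0 = corrected_ratio(**x0)

u2 = 0.0
for name, (val, u) in inputs.items():
    x = dict(x0)
    x[name] = val + u                      # perturb one input by its uncertainty
    contrib = corrected_ratio(**x) - y0    # Kragten difference quotient
    print(f"{name:12s} contribution = {contrib:+.2e}")
    u2 += contrib ** 2

print(f"result = {y0:.5f}, expanded uncertainty U (k=2) = {2*np.sqrt(u2):.2e}")
```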

  8. Uncertainty Quantification and Assessment of CO2 Leakage in Groundwater Aquifers

    NASA Astrophysics Data System (ADS)

    Carroll, S.; Mansoor, K.; Sun, Y.; Jones, E.

    2011-12-01

    The complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions complicate our ability to estimate the impact of leaking CO2 on groundwater quality. We combined lithologic field data from the High Plains Aquifer, numerical simulations, and uncertainty quantification analysis to assess the role of aquifer heterogeneity and physical transport on the extent of the CO2-impacted plume over a 100-year period. The High Plains Aquifer is a major aquifer over much of the central United States where CO2 may be sequestered in depleted oil and gas reservoirs or deep saline formations. Input parameters considered included aquifer heterogeneity, permeability, porosity, regional groundwater flow, CO2 and TDS leakage rates over time, and the number of leakage source points. Sensitivity analyses suggest that variations in sand and clay permeability, correlation lengths, van Genuchten parameters, and CO2 leakage rate have the greatest impact on the impacted volume or the maximum distance from the leak source. A key finding is that the relative sensitivity of the parameters changes over the 100-year period. Reduced-order models developed from regression of the numerical simulations show that the volume of the CO2-impacted aquifer increases over time, with a variance of two orders of magnitude.

  9. Application of fuzzy system theory in addressing the presence of uncertainties

    NASA Astrophysics Data System (ADS)

    Yusmye, A. Y. N.; Goh, B. Y.; Adnan, N. F.; Ariffin, A. K.

    2015-02-01

    In this paper, the combination of fuzzy system theory with finite element methods is presented and discussed as a means of dealing with uncertainties. Accounting for uncertainties is necessary to prevent failure of materials in engineering. There are three types of uncertainties: stochastic, epistemic, and error uncertainties. In this paper, epistemic uncertainties are considered. Epistemic uncertainty exists as a result of incomplete information and a lack of knowledge or data. Fuzzy system theory is a non-probabilistic method, and it is more appropriate than a statistical approach for interpreting uncertainty when data are scarce. Fuzzy system theory comprises several steps, starting with the conversion of crisp inputs to fuzzy inputs through fuzzification, followed by the main step, known as mapping, which expresses the logical relationship between two or more entities. In this study, the fuzzy inputs are numerically integrated based on the extension principle. In the final stage, defuzzification is implemented, an important step that converts the fuzzy output into crisp outputs. Several illustrative examples are given; the simulations show that the proposed method produces more conservative results than the conventional finite element method.
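
    As a minimal sketch of the fuzzification-mapping-defuzzification chain (with interval arithmetic on alpha-cuts standing in for the extension principle, and a closed-form beam deflection standing in for the finite element solution), the example below is illustrative only; all values and names are assumptions.

```python
# Minimal sketch of fuzzy input propagation via alpha-cuts: triangular fuzzy
# inputs are cut into intervals, propagated through a monotonic response
# (a cantilever tip deflection stands in for the FEM solution), and
# reassembled into a fuzzy output. Values are illustrative only.
import numpy as np

def tri_cut(a, b, c, alpha):
    # Alpha-cut interval of a triangular fuzzy number (a, b, c)
    return a + alpha * (b - a), c - alpha * (c - b)

def deflection(F, E):
    L, I = 2.0, 8.0e-6                       # beam length (m), second moment (m^4)
    return F * L**3 / (3.0 * E * I)          # increasing in F, decreasing in E

for alpha in np.linspace(0.0, 1.0, 6):
    F_lo, F_hi = tri_cut(900.0, 1000.0, 1100.0, alpha)        # load (N)
    E_lo, E_hi = tri_cut(1.9e11, 2.0e11, 2.1e11, alpha)       # Young's modulus (Pa)
    # Monotonicity lets the output interval be read off the corner points
    d_lo = deflection(F_lo, E_hi)
    d_hi = deflection(F_hi, E_lo)
    print(f"alpha = {alpha:.1f}: deflection in [{d_lo*1e3:.2f}, {d_hi*1e3:.2f}] mm")
```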

  10. Best Practices for Reduction of Uncertainty in CFD Results

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Childs, Robert E.; Morrison, Joseph H.

    2003-01-01

    This paper describes a proposed best-practices system that will present expert knowledge in the use of CFD. The best-practices system will include specific guidelines to assist the user in problem definition, input preparation, grid generation, code selection, parameter specification, and results interpretation. The goal of the system is to assist all CFD users in obtaining high quality CFD solutions with reduced uncertainty and at lower cost for a wide range of flow problems. The best-practices system will be implemented as a software product which includes an expert system made up of knowledge databases of expert information with specific guidelines for individual codes and algorithms. The process of acquiring expert knowledge is discussed, and help from the CFD community is solicited. Benefits and challenges associated with this project are examined.

  11. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  12. Aerosol direct radiative effects over the northwest Atlantic, northwest Pacific, and North Indian Oceans: estimates based on in-situ chemical and optical measurements and chemical transport modeling

    NASA Astrophysics Data System (ADS)

    Bates, T. S.; Anderson, T. L.; Baynard, T.; Bond, T.; Boucher, O.; Carmichael, G.; Clarke, A.; Erlick, C.; Guo, H.; Horowitz, L.; Howell, S.; Kulkarni, S.; Maring, H.; McComiskey, A.; Middlebrook, A.; Noone, K.; O'Dowd, C. D.; Ogren, J.; Penner, J.; Quinn, P. K.; Ravishankara, A. R.; Savoie, D. L.; Schwartz, S. E.; Shinozuka, Y.; Tang, Y.; Weber, R. J.; Wu, Y.

    2006-05-01

    The largest uncertainty in the radiative forcing of climate change over the industrial era is that due to aerosols, a substantial fraction of which is the uncertainty associated with scattering and absorption of shortwave (solar) radiation by anthropogenic aerosols in cloud-free conditions (IPCC, 2001). Quantifying and reducing the uncertainty in aerosol influences on climate is critical to understanding climate change over the industrial period and to improving predictions of future climate change for assumed emission scenarios. Measurements of aerosol properties during major field campaigns in several regions of the globe during the past decade are contributing to an enhanced understanding of atmospheric aerosols and their effects on light scattering and climate. The present study, which focuses on three regions downwind of major urban/population centers (North Indian Ocean (NIO) during INDOEX, the Northwest Pacific Ocean (NWP) during ACE-Asia, and the Northwest Atlantic Ocean (NWA) during ICARTT), incorporates understanding gained from field observations of aerosol distributions and properties into calculations of perturbations in radiative fluxes due to these aerosols. This study evaluates the current state of observations and of two chemical transport models (STEM and MOZART). Measurements of burdens, extinction optical depth (AOD), and direct radiative effect of aerosols (DRE - change in radiative flux due to total aerosols) are used as measurement-model check points to assess uncertainties. In-situ measured and remotely sensed aerosol properties for each region (mixing state, mass scattering efficiency, single scattering albedo, and angular scattering properties and their dependences on relative humidity) are used as input parameters to two radiative transfer models (GFDL and University of Michigan) to constrain estimates of aerosol radiative effects, with uncertainties in each step propagated through the analysis. Constraining the radiative transfer calculations by observational inputs increases the clear-sky, 24-h averaged AOD (34±8%), top of atmosphere (TOA) DRE (32±12%), and TOA direct climate forcing of aerosols (DCF - change in radiative flux due to anthropogenic aerosols) (37±7%) relative to values obtained with "a priori" parameterizations of aerosol loadings and properties (GFDL RTM). The resulting constrained clear-sky TOA DCF is -3.3±0.47, -14±2.6, -6.4±2.1 Wm-2 for the NIO, NWP, and NWA, respectively. With the use of constrained quantities (extensive and intensive parameters) the calculated uncertainty in DCF was 25% less than the "structural uncertainties" used in the IPCC-2001 global estimates of direct aerosol climate forcing. Such comparisons with observations and resultant reductions in uncertainties are essential for improving and developing confidence in climate model calculations incorporating aerosol forcing.

  13. Weighted Iterative Bayesian Compressive Sensing (WIBCS) for High Dimensional Polynomial Surrogate Construction

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2016-12-01

    Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, the PC surrogate construction strongly suffers from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate with very few model evaluations. The surrogate is then readily employed for global sensitivity analysis leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Elucidating Inherent Uncertainties in Data Assimilation for Predictions Incorporating Non-stationary Processes - Focus on Predictive Phenology

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2017-12-01

    Data assimilation (DA) is the widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light and water demand. Presently, climatological records for fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modelled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g. less than 10 years). Using limited records with a DA framework imposes non-stationarity on estimated parameters and the resulting predicted model states. This talk discusses how uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. The way water demand is accounted for in the upscaling of DA model inputs, together with the choice of analysis period, serves as a key source of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to take up water, transpire, and assimilate carbon under favorable conditions and quickly shut down at the onset of water stress.

  15. Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.

    2013-08-01

    Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
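    A minimal sketch of the surrogate-based optimization workflow described above: a Gaussian-process response surface is fit to a handful of runs of an "expensive" model, and the cheap surrogate is then optimized. The two design variables, the toy objective standing in for the fractal-dimension metric, and the choice of GP regression with multi-start L-BFGS-B are illustrative assumptions, not the authors' setup.

```python
# Sketch: fit a GP response surface to a few expensive runs, then optimize the surrogate.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def expensive_model(x):
    # Placeholder for a high-fidelity fracturing simulation returning the
    # objective (e.g., a fractal-dimension-based sweep-efficiency proxy)
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2) + 0.05 * np.sin(10 * x[0])

# Small training design over a normalized two-variable design space [0, 1]^2
X_train = rng.uniform(0, 1, size=(25, 2))
y_train = np.array([expensive_model(x) for x in X_train])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

# Maximize the surrogate prediction (minimize its negative) from multiple starts
best = None
for x0 in rng.uniform(0, 1, size=(10, 2)):
    res = minimize(lambda x: -gp.predict(x.reshape(1, -1))[0],
                   x0, bounds=[(0, 1), (0, 1)])
    if best is None or res.fun < best.fun:
        best = res
print("surrogate optimum at", best.x, "predicted objective", -best.fun)
```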

  16. Modeling the Effects of Irrigation on Land Surface Fluxes and States over the Conterminous United States: Sensitivity to Input Data and Model Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Guoyong; Huang, Maoyi; Tang, Qiuhong

    2013-09-16

    Previous studies on irrigation impacts on land surface fluxes/states were mainly conducted as sensitivity experiments, with limited analysis of uncertainties from the input data and model irrigation schemes used. In this study, we calibrated and evaluated the performance of irrigation water use simulated by the Community Land Model version 4 (CLM4) against observations from agriculture census. We investigated the impacts of irrigation on land surface fluxes and states over the conterminous United States (CONUS) and explored possible directions of improvement. Specifically, we found large uncertainty in the irrigation area data from two widely used sources, and CLM4 tended to produce unrealistically large temporal variations of irrigation demand for applications at the water resources region scale over CONUS. At seasonal to interannual time scales, the effects of irrigation on surface energy partitioning appeared to be large and persistent, and more pronounced in dry than wet years. Even with model calibration to yield overall good agreement with the irrigation amounts from the National Agricultural Statistics Service (NASS), differences between the two irrigation area datasets still dominate the differences in the interannual variability of land surface response to irrigation. Our results suggest that irrigation amount simulated by CLM4 can be improved by (1) calibrating model parameter values to account for regional differences in irrigation demand and (2) accurate representation of the spatial distribution and intensity of irrigated areas.

  17. A model-constrained Monte Carlo method for blind arterial input function estimation in dynamic contrast-enhanced MRI: II. In vivo results

    NASA Astrophysics Data System (ADS)

    Schabel, Matthias C.; DiBella, Edward V. R.; Jensen, Randy L.; Salzman, Karen L.

    2010-08-01

    Accurate quantification of pharmacokinetic model parameters in tracer kinetic imaging experiments requires correspondingly accurate determination of the arterial input function (AIF). Despite significant effort expended on methods of directly measuring patient-specific AIFs in modalities as diverse as dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), dynamic positron emission tomography (PET), and perfusion computed tomography (CT), fundamental and technical difficulties have made consistent and reliable achievement of that goal elusive. Here, we validate a new algorithm for AIF determination, the Monte Carlo blind estimation (MCBE) method (which is described in detail and characterized by extensive simulations in a companion paper), by comparing AIFs measured in DCE-MRI studies of eight brain tumor patients with results of blind estimation. Blind AIFs calculated with the MCBE method using a pool of concentration-time curves from a region of normal brain tissue were found to be quite similar to the measured AIFs, with statistically significant decreases in fit residuals observed in six of eight patients. Biases between the blind and measured pharmacokinetic parameters were the dominant source of error. Averaged over all eight patients, the mean biases were +7% in K^trans, 0% in k_ep, -11% in v_p and +10% in v_e. Corresponding uncertainties (median absolute deviation from the best fit line) were 0.0043 min^-1 in K^trans, 0.0491 min^-1 in k_ep, 0.29% in v_p and 0.45% in v_e. The use of a published population-averaged AIF resulted in larger mean biases in three of the four parameters (-23% in K^trans, -22% in k_ep, -63% in v_p), with the bias in v_e unchanged, and led to larger uncertainties in all four parameters (0.0083 min^-1 in K^trans, 0.1038 min^-1 in k_ep, 0.31% in v_p and 0.95% in v_e). When blind AIFs were calculated from a region of tumor tissue, statistically significant decreases in fit residuals were observed in all eight patients despite larger deviations of these blind AIFs from the measured AIFs. The observed decrease in root-mean-square fit residuals between the normal brain and tumor tissue blind AIFs suggests that the local blood supply in tumors is measurably different from that in normal brain tissue and that the proposed method is able to discriminate between the two. We have shown the feasibility of applying the MCBE algorithm to DCE-MRI data acquired in brain, finding generally good agreement with measured AIFs and decreased biases and uncertainties relative to the use of a population-averaged AIF. These results demonstrate that the MCBE algorithm is a useful alternative to direct AIF measurement in cases where acquisition of high-quality arterial input function data is difficult or impossible.
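    For readers unfamiliar with the parameters reported above, the sketch below fits a tissue concentration curve with the extended Tofts model given a known AIF, showing how K^trans, k_ep and v_p enter the forward model and how v_e follows from K^trans/k_ep. The synthetic AIF, time grid, and least-squares fit are assumptions for illustration; the blind-estimation step itself (recovering the AIF without measuring it) is not reproduced here.

```python
# Sketch: extended Tofts fit of a tissue concentration curve given a known AIF.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 120)                        # time [min]
aif = 5.0 * t ** 2 * np.exp(-t / 0.25)            # synthetic arterial input function Cp(t)

def tofts(t, ktrans, kep, vp):
    # C_t(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(tau) * exp(-kep*(t - tau)) dtau
    dt = t[1] - t[0]
    conv = np.convolve(aif, np.exp(-kep * t))[: t.size] * dt
    return vp * aif + ktrans * conv

# Simulate a noisy tissue curve with known parameters, then refit it
true = (0.25, 0.8, 0.03)                          # Ktrans [1/min], kep [1/min], vp [-]
ct = tofts(t, *true) + np.random.default_rng(2).normal(0, 0.02, t.size)

popt, pcov = curve_fit(tofts, t, ct, p0=(0.1, 0.5, 0.01),
                       bounds=([0, 0, 0], [2, 5, 0.2]))
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(("Ktrans", "kep", "vp"), popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
print("ve = Ktrans / kep =", round(popt[0] / popt[1], 3))
```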

  18. Atmospheric opacity in the Schumann-Runge bands and the aeronomic dissociation of water vapor

    NASA Technical Reports Server (NTRS)

    Frederick, J. E.; Hudson, R. D.

    1980-01-01

    Knowledge of the aeronomic production of odd hydrogen in the dissociation of water vapor is limited by uncertainties in the penetration of solar irradiance in the Schumann-Runge bands of O2 and by incomplete information concerning the products of photolysis at Lyman alpha. Consideration of all error sources involved in computing the H2O dissociation rate in the wavelength region 175-200 nm leads to an estimated uncertainty of plus or minus 35% at an altitude of 90 km for an overhead sun. The uncertainty increases with decreasing altitude such that the true dissociation rate at 60 km for an overhead sun lies between 0.45 and 1.55 times the results computed using the best input parameters currently available. Calculations of the H2O dissociation rate by Lyman alpha should include the variation in O2 opacity across the solar line width. Neglect of this can lead to errors as large as 50% at altitudes where the process is the major source of odd hydrogen.
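    The structure of the dissociation-rate calculation discussed above can be sketched as a wavelength sum of the H2O cross section times the solar flux attenuated by O2 opacity in the Schumann-Runge bands. All cross sections, fluxes, and column amounts in the snippet are made-up round numbers chosen only to show the form of the calculation, not real band data.

```python
# Sketch: J(z) = sum_lambda sigma_H2O * F_top * exp(-sigma_O2 * N_O2(z) * sec(chi)).
import numpy as np

wav = np.linspace(175e-9, 200e-9, 50)               # wavelength grid [m]
flux_top = np.full(wav.size, 1.0e15)                # top-of-atmosphere flux [photons m^-2 s^-1 per bin] (illustrative)
sigma_h2o = np.full(wav.size, 4.0e-22)              # H2O absorption cross section [m^2] (illustrative)
sigma_o2 = 5.0e-28 * (1 + np.sin(wav * 4e8) ** 2)   # structured O2 Schumann-Runge cross section [m^2] (illustrative)

N_o2_column = 2.0e24                                # overhead O2 column at the altitude of interest [m^-2] (illustrative)
sec_chi = 1.0                                       # overhead sun

# Beer-Lambert attenuation by O2 in the Schumann-Runge bands, then absorption by H2O
flux_z = flux_top * np.exp(-sigma_o2 * N_o2_column * sec_chi)
J = np.sum(sigma_h2o * flux_z)                      # photodissociation rate [s^-1]
print(f"H2O photodissociation rate J = {J:.2e} s^-1")
```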

  19. Management of uncertainties on parameters elicited by experts - Applications to sea-level rise and to CO2 storage operations risk assessment

    NASA Astrophysics Data System (ADS)

    Manceau, Jean-Charles; Loschetter, Annick; Rohmer, Jérémy; Le Cozannet, Gonéri; de Lary, Louis; Le Guénan, Thomas; Hnottavange-Telleen, Ken

    2017-04-01

    In a context of a high degree of uncertainty, when very few data are available, experts are commonly asked to provide their opinions on input parameters of risk assessment models. Not only might each expert express a certain degree of uncertainty about his/her own statements, but the set of information collected from the pool of experts introduces an additional level of uncertainty. It is indeed very unlikely that all experts agree on exactly the same data, especially regarding parameters needed for natural risk assessments. In some cases, their opinions may differ only slightly (e.g., the most plausible value for a parameter is similar across experts, and they disagree only on the level of uncertainty that taints that value), while in other cases they may express incompatible opinions for the same parameter. Dealing with these different kinds of uncertainty remains a challenge for assessing geological hazards and risks. Extra-probabilistic approaches (such as the Dempster-Shafer theory or possibility theory) have been shown to offer promising solutions for representing parameters for which knowledge is limited. This is the case, for instance, when the available information prevents an expert from identifying a unique probability law to describe the total uncertainty. Moreover, such approaches are known to be particularly flexible when it comes to aggregating several, potentially conflicting, opinions. We therefore propose to discuss the opportunity of applying these theories to manage the uncertainties on parameters elicited from experts, by comparison with the application of more classical probabilistic approaches. The discussion is based on two examples. The first example deals with the estimation of the injected CO2 plume extent in a reservoir in the context of CO2 geological storage. This estimation requires information on the effective porosity of the reservoir, which has been estimated by 14 different experts. The Dempster-Shafer theory has been used to represent and aggregate these pieces of information. The results of different aggregation rules, as well as those of a classical probabilistic approach, are compared with the purpose of highlighting the elements each of them could provide to the decision-maker (Manceau et al., 2016). The second example focuses on projections of future sea-level rise. Based on the IPCC's constraints on the projection quantiles, and on the scientific community's level of consensus on the physical limits to future sea-level rise, a possibility distribution of the projections by 2100 under the RCP 8.5 scenario has been established. This possibility distribution has been confronted with a set of previously published probabilistic sea-level projections, with a focus on their ability to explore high ranges of sea-level rise (Le Cozannet et al., 2016). These two examples are complementary in the sense that they address various aspects of the problem (e.g., representation of different types of information, conflict among experts, dependence between sources). Moreover, we believe that the issues faced during these two exercises can be generalized to many risk/hazard assessment situations. References: Manceau, J.-C., Loschetter, A., Rohmer, J., de Lary, L., Le Guénan, T., Hnottavange-Telleen, K. (2016). Dealing with uncertainty on parameters elicited from a pool of experts for CCS risk assessment. Congrès λμ 20 (St-Malo, France). Le Cozannet, G., Manceau, J.-C., Rohmer, J. (2016). Bounding probabilistic sea-level rise projections within the framework of the possibility theory. Accepted in Environmental Research Letters.
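    A minimal sketch of representing each expert's porosity estimate as a trapezoidal possibility distribution and aggregating across experts. The three expert intervals and the simple combination rules shown are illustrative assumptions; the study itself aggregates 14 expert opinions using Dempster-Shafer combination rules.

```python
# Sketch: trapezoidal possibility distributions for expert porosity opinions + aggregation.
import numpy as np

phi = np.linspace(0.0, 0.4, 401)                    # candidate porosity values [-]

def trapezoid(x, a, b, c, d):
    # Possibility 1 on the core [b, c], decreasing linearly to 0 at a and d
    return np.clip(np.minimum((x - a) / max(b - a, 1e-12),
                              (d - x) / max(d - c, 1e-12)), 0.0, 1.0)

# Each expert: (support min, core min, core max, support max) for porosity
experts = [(0.05, 0.10, 0.15, 0.20),
           (0.08, 0.12, 0.18, 0.25),
           (0.15, 0.20, 0.25, 0.30)]                # partially conflicts with the first expert
pi = np.array([trapezoid(phi, *e) for e in experts])

conjunctive = pi.min(axis=0)   # all experts assumed reliable; shrinks (or vanishes) under conflict
disjunctive = pi.max(axis=0)   # at least one expert assumed reliable; widens under conflict
averaged = pi.mean(axis=0)     # trade-off rule

for name, agg in [("conjunctive", conjunctive), ("disjunctive", disjunctive), ("average", averaged)]:
    support = phi[agg > 0]
    print(f"{name:12s} support [{support.min():.2f}, {support.max():.2f}], max possibility {agg.max():.2f}")
```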

  20. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
