Tyler Jon Smith; Lucy Amanda Marshall
2010-01-01
Model selection is an extremely important aspect of many hydrologic modeling studies because of the complexity, variability, and uncertainty that surrounds the current understanding of watershed-scale systems. However, development and implementation of a complete precipitation-runoff modeling framework, from model selection to calibration and uncertainty analysis, are...
Aerosol-type retrieval and uncertainty quantification from OMI data
NASA Astrophysics Data System (ADS)
Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna
2017-11-01
We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists in the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
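As a hedged illustration of the averaging step described above, the following Python sketch combines per-model AOD posteriors into a Bayesian-model-averaged estimate; the evidences, means, and standard deviations are hypothetical placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-model results: marginal evidence p(y | M_k) and a Gaussian
# approximation of the AOD posterior for each aerosol microphysical model (LUT).
evidences = np.array([1.2e-3, 9.5e-4, 2.1e-4])   # p(y | M_k), unnormalised
aod_means = np.array([0.42, 0.47, 0.61])          # posterior mean AOD per model
aod_stds  = np.array([0.05, 0.06, 0.09])          # posterior std of AOD per model

# Posterior model probabilities (equal priors assumed).
weights = evidences / evidences.sum()

# Bayesian-model-averaged AOD density: a mixture of the per-model posteriors.
tau_grid = np.linspace(0.0, 1.2, 601)
bma_pdf = sum(w * norm.pdf(tau_grid, m, s)
              for w, m, s in zip(weights, aod_means, aod_stds))

# Moments of the mixture give an averaged AOD estimate and a total uncertainty
# that includes the spread between LUT models (model-selection uncertainty).
bma_mean = np.sum(weights * aod_means)
bma_var = np.sum(weights * (aod_stds**2 + aod_means**2)) - bma_mean**2
print(f"BMA AOD = {bma_mean:.3f} +/- {np.sqrt(bma_var):.3f}")

# "Shared evidence" of a main aerosol type = sum of weights of its sub-models,
# e.g. if the first two LUTs are weakly absorbing and the third is dust:
print("P(weakly absorbing) =", weights[:2].sum())
```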
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top-of-atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). The focus is on the uncertainty in the selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
NASA Astrophysics Data System (ADS)
Mockler, E. M.; Chun, K. P.; Sapriza-Azuri, G.; Bruen, M.; Wheater, H. S.
2016-11-01
Predictions of river flow dynamics provide vital information for many aspects of water management including water resource planning, climate adaptation, and flood and drought assessments. Many of the subjective choices that modellers make, including model and criteria selection, can have a significant impact on the magnitude and distribution of the output uncertainty. Hydrological modellers are tasked with understanding and minimising the uncertainty surrounding streamflow predictions before communicating the overall uncertainty to decision makers. Parameter uncertainty in conceptual rainfall-runoff models has been widely investigated, and model structural uncertainty and forcing data have been receiving increasing attention. This study aimed to assess uncertainties in streamflow predictions due to forcing data and the identification of behavioural parameter sets in 31 Irish catchments. By combining stochastic rainfall ensembles and multiple parameter sets for three conceptual rainfall-runoff models, an analysis of variance model was used to decompose the total uncertainty in streamflow simulations into contributions from (i) forcing data, (ii) identification of model parameters and (iii) interactions between the two. The analysis illustrates that, for our subjective choices, hydrological model selection had a greater contribution to overall uncertainty, while performance criteria selection influenced the relative intra-annual uncertainties in streamflow predictions. Uncertainties in streamflow predictions due to the method of determining parameters were relatively lower for wetter catchments, and more evenly distributed throughout the year when the Nash-Sutcliffe Efficiency of logarithmic values of flow (lnNSE) was the evaluation criterion.
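A minimal sketch of the variance decomposition idea, assuming a synthetic ensemble of simulated flows indexed by rainfall realisation and parameter set; the fixed-effects ANOVA partition into forcing, parameter, and interaction terms is generic, not the authors' exact implementation.

```python
import numpy as np

# Hypothetical ensemble of simulated flows Q[i, j]: i indexes stochastic rainfall
# realisations (forcing), j indexes behavioural parameter sets of one model.
rng = np.random.default_rng(1)
n_forcing, n_params = 50, 20
Q = rng.normal(size=(n_forcing, n_params)) + \
    rng.normal(size=(n_forcing, 1)) * 1.5 + rng.normal(size=(1, n_params)) * 0.5

grand = Q.mean()
forcing_eff = Q.mean(axis=1) - grand            # main effect of forcing
param_eff = Q.mean(axis=0) - grand              # main effect of parameter identification
interaction = Q - grand - forcing_eff[:, None] - param_eff[None, :]

# Sums of squares -> fractional contributions to total ensemble variance.
ss_f = n_params * np.sum(forcing_eff**2)
ss_p = n_forcing * np.sum(param_eff**2)
ss_i = np.sum(interaction**2)
ss_tot = np.sum((Q - grand)**2)
print("forcing:", ss_f / ss_tot, "parameters:", ss_p / ss_tot, "interaction:", ss_i / ss_tot)
```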
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model-predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer programming and stochastic global search.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2014-05-01
Satellite instruments are nowadays successfully utilised for measuring atmospheric aerosol in many applications as well as in research. Therefore, there is a growing need for rigorous error characterisation of the measurements. Here, we introduce a methodology for quantifying the uncertainty in the retrieval of aerosol optical thickness (AOT). In particular, we concentrate on two aspects: uncertainty due to aerosol microphysical model selection and uncertainty due to imperfect forward modelling. We apply the introduced methodology for aerosol optical thickness retrieval of the Ozone Monitoring Instrument (OMI) on board NASA's Earth Observing System (EOS) Aura satellite, launched in 2004. We apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness retrieval by propagating aerosol microphysical model selection and forward model error more realistically. For the microphysical model selection problem, we utilise Bayesian model selection and model averaging methods. Gaussian processes are utilised to characterise the smooth systematic discrepancies between the measured and modelled reflectances (i.e. residuals). The spectral correlation is composed empirically by exploring a set of residuals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques introduced here. The method and improved uncertainty characterisation is demonstrated by several examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The statistical methodology presented is general; it is not restricted to this particular satellite retrieval application.
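A hedged sketch of the discrepancy-modelling idea: estimate the spectral covariance of reflectance residuals from an ensemble of retrievals and represent it with a squared-exponential Gaussian-process covariance added to the measurement-noise covariance. Wavelengths, noise levels, and the correlation length are illustrative assumptions, not OMAERO values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Wavelengths (nm) of a hypothetical multi-wavelength retrieval and an ensemble of
# reflectance residuals (observed minus modelled) collected from earlier retrievals.
wavelengths = np.array([342.5, 388.0, 442.0, 463.0, 483.5])
true_cov = 0.005**2 * np.exp(-0.5 * ((wavelengths[:, None] - wavelengths[None, :]) / 60.0)**2)
residual_ensemble = rng.multivariate_normal(np.zeros(5), true_cov, size=200)

# Empirical spectral covariance of the smooth systematic discrepancy.
C_emp = np.cov(residual_ensemble, rowvar=False)

# Squared-exponential (Gaussian-process) covariance fitted crudely: match the mean
# variance and pick a correlation length ell (fixed by hand here for illustration).
sigma2, ell = np.mean(np.diag(C_emp)), 60.0
C_gp = sigma2 * np.exp(-0.5 * ((wavelengths[:, None] - wavelengths[None, :]) / ell)**2)

# Total error covariance used in the retrieval: measurement noise plus model discrepancy.
C_noise = np.diag(np.full(5, 0.002**2))
C_total = C_noise + C_gp
print(np.round(C_total * 1e6, 3))   # in units of 1e-6 reflectance^2
```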
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is proposed in a scenario-based approach, in which demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed model is presented using a recent extension in robust optimization theory. Decision variables are discussed, respectively, for a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings and illustrated that, to cope with such uncertainties, they should be considered in advance in planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
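For orientation, the sketch below shows a basic adaptive Metropolis sampler (proposal covariance adapted from the chain history) applied to a toy exponential-decay model; it is a simplified stand-in for the DRAM and DREAM algorithms used in the dissertation, which add delayed rejection and differential-evolution proposals, respectively. The model, data, and tuning constants are assumptions for illustration only.

```python
import numpy as np

def log_post(theta, y, t):
    # Hypothetical log-posterior: exponential-decay model with Gaussian errors
    # and a flat prior on theta = (amplitude, rate) over a positive range.
    if np.any(np.asarray(theta) <= 0) or np.any(np.asarray(theta) > 100):
        return -np.inf
    pred = theta[0] * np.exp(-theta[1] * t)
    return -0.5 * np.sum((y - pred)**2) / 0.05**2

def adaptive_metropolis(log_post, theta0, n_iter, args=(), adapt_start=500):
    d = len(theta0)
    chain = np.empty((n_iter, d))
    theta, lp = np.array(theta0, float), log_post(theta0, *args)
    cov = np.eye(d) * 1e-4
    rng = np.random.default_rng(42)
    for i in range(n_iter):
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop, *args)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
        if i >= adapt_start:   # adapt the proposal covariance from the chain history
            cov = 2.38**2 / d * np.cov(chain[:i + 1].T) + 1e-10 * np.eye(d)
    return chain

t = np.linspace(0, 5, 40)
rng = np.random.default_rng(0)
y = 2.0 * np.exp(-0.7 * t) + rng.normal(0, 0.05, t.size)
chain = adaptive_metropolis(log_post, [1.0, 1.0], 5000, args=(y, t))
print(chain[2500:].mean(axis=0), chain[2500:].std(axis=0))   # posterior summaries
```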
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
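A minimal sketch of the averaging step: given first-order sensitivity indices computed separately under each model and scenario, weight them by (hypothetical) model probabilities and scenario weights to obtain averaged parameter importance. The index values below are placeholders, not results from the study.

```python
import numpy as np

# Hypothetical first-order sensitivity indices S[m, s, p] of 5 parameters, computed
# separately under each of 4 alternative models (m) and 6 scenarios (s).
rng = np.random.default_rng(3)
S = rng.dirichlet(np.ones(5), size=(4, 6))        # shape (models, scenarios, params)

model_weights = np.array([0.4, 0.3, 0.2, 0.1])    # e.g. posterior model probabilities
scenario_weights = np.full(6, 1 / 6)               # e.g. equally plausible scenarios

# Model- and scenario-averaged importance of each parameter.
S_avg = np.einsum('m,s,msp->p', model_weights, scenario_weights, S)
ranking = np.argsort(S_avg)[::-1]
print("averaged indices:", np.round(S_avg, 3), "ranking:", ranking)
```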
A multi-fidelity analysis selection method using a constrained discrete optimization formulation
NASA Astrophysics Data System (ADS)
Stults, Ian C.
The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is done so in only a limited fashion, and therefore brings the validity of selections made based on these results into question. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. Rather than neglecting it, if one were to instead not only realize the model uncertainty in tools being used but also use this information to select the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Methods for performing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which will minimize the amount of time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data is available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means for quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, these are poor candidate problems for testing the true fitness of a candidate selection method. Specifically, the variability of moderate- and high-dimensional problems can often be reduced to only a few dimensions, and scalability often cannot be easily addressed. For these reasons, a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances, a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly.
To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
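As a rough illustration of the canonical Fidelity Selection Problem, the sketch below uses a small genetic algorithm with a penalty for violating an uncertainty budget; the cost and uncertainty numbers, population size, and operators are illustrative assumptions rather than the dissertation's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative fidelity selection problem: for each of 6 contributing analyses there
# are 3 candidate codes with a time cost and a quantified model uncertainty; pick one
# code per analysis to minimise total time while keeping total uncertainty below a budget.
time_cost = np.sort(rng.uniform(1, 10, size=(6, 3)))                    # column 0 cheap ... column 2 costly
uncertainty = np.sort(rng.uniform(0.1, 1.0, size=(6, 3)))[:, ::-1]      # column 0 uncertain ... column 2 accurate
U_BUDGET = 3.0

def fitness(ind):
    t = time_cost[np.arange(6), ind].sum()
    u = uncertainty[np.arange(6), ind].sum()
    return t + 1e3 * max(0.0, u - U_BUDGET)     # penalise constraint violation

pop = rng.integers(0, 3, size=(40, 6))          # each gene = fidelity level of one analysis
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][:20]                   # truncation selection
    children = parents[rng.integers(0, 20, 40)].copy()
    mates = parents[rng.integers(0, 20, 40)]
    cut = rng.integers(1, 6, 40)
    for i in range(40):                                       # one-point crossover
        children[i, cut[i]:] = mates[i, cut[i]:]
    mutate = rng.random(children.shape) < 0.05                # random-reset mutation
    children[mutate] = rng.integers(0, 3, mutate.sum())
    pop = children

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("selected fidelity per analysis:", best,
      "feasible:", uncertainty[np.arange(6), best].sum() <= U_BUDGET)
```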
Management of California Oak Woodlands: Uncertainties and Modeling
Jay E. Noel; Richard P. Thompson
1995-01-01
A mathematical policy model of oak woodlands is presented. The model illustrates the policy uncertainties that exist in the management of oak woodlands. These uncertainties include: (1) selection of a policy criterion function, (2) woodland dynamics, (3) initial and final state of the woodland stock. The paper provides a review of each of the uncertainty issues. The...
NASA Astrophysics Data System (ADS)
Wang, Weizong; Berthelot, Antonin; Zhang, Quanzhi; Bogaerts, Annemie
2018-05-01
One of the main issues in plasma chemistry modeling is that the cross sections and rate coefficients are subject to uncertainties, which yields uncertainties in the modeling results and hence hinders the predictive capabilities. In this paper, we reveal the impact of these uncertainties on the model predictions of plasma-based dry reforming in a dielectric barrier discharge. For this purpose, we performed a detailed uncertainty analysis and sensitivity study. 2000 different combinations of rate coefficients, based on the uncertainty from a log-normal distribution, are used to predict the uncertainties in the model output. The uncertainties in the electron density and electron temperature are around 11% and 8% at the maximum of the power deposition for a 70% confidence level. Still, this can have a major effect on the electron impact rates and hence on the calculated conversions of CO2 and CH4, as well as on the selectivities of CO and H2. For the CO2 and CH4 conversion, we obtain uncertainties of 24% and 33%, respectively. For the CO and H2 selectivity, the corresponding uncertainties are 28% and 14%, respectively. We also identify which reactions contribute most to the uncertainty in the model predictions. In order to improve the accuracy and reliability of plasma chemistry models, we recommend using only verified rate coefficients, and we point out the need for dedicated verification experiments.
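A hedged sketch of the propagation step: draw 2000 combinations of rate coefficients from log-normal distributions defined by assumed uncertainty factors, push them through a toy surrogate for the conversion, and report a 70% confidence interval. The surrogate function and all numbers are placeholders for the full plasma chemistry model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy surrogate: a quantity of interest (e.g. CO2 conversion in %) computed from a
# handful of rate coefficients k1..k4. Purely illustrative; the paper runs the full
# plasma chemistry model instead.
def conversion(k):
    return 100 * k[0] * k[2] / (k[0] * k[2] + k[1] + 0.1 * k[3])

k_nominal = np.array([1e-9, 3e-10, 5e-10, 2e-10])
uncertainty_factor = np.array([2.0, 1.5, 3.0, 2.0])   # assumed multiplicative uncertainties

# Log-normal sampling such that the central 70% interval spans nominal/factor .. nominal*factor.
sigma = np.log(uncertainty_factor) / norm.ppf(0.85)
samples = k_nominal * np.exp(rng.normal(0.0, sigma, size=(2000, 4)))

out = np.array([conversion(k) for k in samples])
lo, mid, hi = np.percentile(out, [15, 50, 85])        # central 70% confidence interval
print(f"conversion = {mid:.1f}%,  70% CI: [{lo:.1f}, {hi:.1f}]")
```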
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang
2014-05-01
Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability to be the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer from a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit in performance of the model set which is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate our suggested approach with an application to model selection between different soil-plant models, following up on a study by Wöhling et al. (2013). Results show that measurement noise compromises the reliability of model ranking and causes a significant amount of weighting uncertainty if the calibration data time series is not long enough to compensate for its noisiness. This additional contribution to the overall predictive uncertainty is neglected without our approach. Thus, we strongly advocate including our suggested upgrade in the Bayesian model averaging routine.
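A minimal sketch of the perturbation loop, assuming two linear candidate models and BIC-based approximations of the posterior model weights (the original work uses full Bayesian model averaging); the point is the repeated perturbation of the calibration data and the resulting spread of weights.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical calibration series and two competing models (linear vs. quadratic).
t = np.linspace(0, 1, 30)
y_obs = 1.0 + 2.0 * t + rng.normal(0, 0.1, t.size)
sigma_noise = 0.1

def bic_weights(y):
    designs = [np.column_stack([np.ones_like(t), t]),             # model 1: linear
               np.column_stack([np.ones_like(t), t, t**2])]       # model 2: quadratic
    bics = []
    for X in designs:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta)**2)
        bics.append(t.size * np.log(rss / t.size) + X.shape[1] * np.log(t.size))
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

# Repeatedly perturb the observations with fresh realisations of measurement error
# and record how the posterior model weights move ("weighting variance").
weights = np.array([bic_weights(y_obs + rng.normal(0, sigma_noise, t.size))
                    for _ in range(500)])
print("mean weights:", weights.mean(axis=0), "weight std:", weights.std(axis=0))
```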
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin
2013-07-01
The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models, based on both the developed discrepancy ratio and threshold statistics, revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. In addition, the uncertainty analysis showed that the percentage of predictions bracketed by the 95% confidence bound and the d-factor in the testing steps for the selected ROANFIS model were 94% and 0.83, respectively.
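For reference, the bracketing percentage and d-factor reported above can be computed from Monte Carlo predictions roughly as follows; synthetic predictions and observations are used here as placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Monte Carlo output: 1000 BOD5 predictions for each of 60 test samples,
# plus the corresponding observed BOD5 values.
mc_preds = rng.normal(20, 3, size=(1000, 60))
obs = rng.normal(20, 3, size=60)

lower = np.percentile(mc_preds, 2.5, axis=0)
upper = np.percentile(mc_preds, 97.5, axis=0)

bracketed = np.mean((obs >= lower) & (obs <= upper)) * 100    # % of observations inside the 95% band
d_factor = np.mean(upper - lower) / obs.std(ddof=1)           # mean band width / std of observations
print(f"bracketed by 95% band: {bracketed:.0f}%   d-factor: {d_factor:.2f}")
```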
Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation
NASA Astrophysics Data System (ADS)
Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.
2002-05-01
This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty? and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to selecting parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP-II-style climate model integrations using NCAR's CCM3.10 that show model performance as functions of individual parameters controlling 1) critical relative humidity for cloud formation (RHMIN), and 2) boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.
Protein construct storage: Bayesian variable selection and prediction with mixtures.
Clyde, M A; Parmigiani, G
1998-07-01
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
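A hedged sketch of the approach: enumerate all linear models over a few hypothetical storage factors, approximate posterior model probabilities with BIC weights, and report factor inclusion probabilities plus a model-averaged prediction for a candidate storage condition. The data and factor names are illustrative, not the experiment's.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Hypothetical protein-storage data: 4 candidate factors (e.g. temperature, pH,
# additive concentration, time) and measured activity; enumerate all linear models.
n, factors = 40, 4
X = rng.uniform(-1, 1, size=(n, factors))
activity = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0, 0.3, n)

models, bics, preds = [], [], []
x_new = np.array([0.5, 0.0, -0.5, 0.2])    # candidate storage condition to evaluate
for k in range(factors + 1):
    for subset in combinations(range(factors), k):
        Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(Xd, activity, rcond=None)
        rss = np.sum((activity - Xd @ beta)**2)
        bics.append(n * np.log(rss / n) + Xd.shape[1] * np.log(n))
        preds.append(np.concatenate([[1.0], x_new[list(subset)]]) @ beta)
        models.append(subset)

# BIC weights as rough posterior model probabilities, then average over models.
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()
incl = [sum(w[i] for i, m in enumerate(models) if j in m) for j in range(factors)]
print("inclusion probabilities:", np.round(incl, 2))
print("model-averaged predicted activity:", round(float(np.dot(w, preds)), 2))
```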
Overall uncertainty study of the hydrological impacts of climate change for a Canadian watershed
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Poulin, Annie; Leconte, Robert
2011-12-01
General circulation models (GCMs) and greenhouse gas emissions scenarios (GGES) are generally considered to be the two major sources of uncertainty in quantifying the climate change impacts on hydrology. Other sources of uncertainty have been given less attention. This study considers overall uncertainty by combining results from an ensemble of two GGES, six GCMs, five GCM initial conditions, four downscaling techniques, three hydrological model structures, and 10 sets of hydrological model parameters. Each climate projection is equally weighted to predict the hydrology on a Canadian watershed for the 2081-2100 horizon. The results show that the choice of GCM is consistently a major contributor to uncertainty. However, other sources of uncertainty, such as the choice of a downscaling method and the GCM initial conditions, also have a comparable or even larger uncertainty for some hydrological variables. Uncertainties linked to GGES and the hydrological model structure are somewhat less than those related to GCMs and downscaling techniques. Uncertainty due to the hydrological model parameter selection has the least important contribution among all the variables considered. Overall, this research underlines the importance of adequately covering all sources of uncertainty. A failure to do so may result in moderately to severely biased climate change impact studies. Results further indicate that the major contributors to uncertainty vary depending on the hydrological variables selected, and that the methodology presented in this paper is successful at identifying the key sources of uncertainty to consider for a climate change impact study.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
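A small illustration of the information-criterion step, assuming hypothetical log-likelihoods and parameter counts for a nested sequence of calibrated models; in the spirit of the Occam step, the simplest model within a tolerance of the best criterion value is retained.

```python
import numpy as np

# Hypothetical calibrated results for increasingly refined coarse-grained models.
log_like = np.array([-120.4, -112.9, -111.8, -111.5])
n_params = np.array([2, 4, 7, 11])
n_data = 200

aic = 2 * n_params - 2 * log_like
bic = n_params * np.log(n_data) - 2 * log_like

tol = 2.0   # keep the simplest model within this tolerance of the best score
for name, crit in [("AIC", aic), ("BIC", bic)]:
    plausible = np.where(crit <= crit.min() + tol)[0]
    simplest = plausible[np.argmin(n_params[plausible])]
    print(f"{name}: values {np.round(crit, 1)} -> select model {simplest} "
          f"({n_params[simplest]} parameters)")
```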
Pereira, Paulo; Westgard, James O; Encarnação, Pedro; Seghatchian, Jerard; de Sousa, Gracinda
2015-02-01
Blood establishments routinely perform screening immunoassays to assess the safety of blood components. As with any other screening test, results have an inherent uncertainty. In blood establishments the major concern is the chance of false negatives, due to their possible impact on patients' health. This article briefly reviews GUM and diagnostic accuracy models for screening immunoassays, recommending a scheme to support screening laboratories' staff in selecting a model according to the intended use of the screening results (i.e., post-transfusion safety). The discussion is grounded in "risk-based thinking", with risk considered from blood donor selection through to the screening immunoassays. A combination of GUM and diagnostic accuracy models to evaluate measurement uncertainty in blood establishments is recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.
Alonso, Ariel; Laenen, Annouschka
2013-05-01
Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
Tyler Jon Smith
2008-01-01
In Montana and much of the Rocky Mountain West, the single most important parameter in forecasting the controls on regional water resources is snowpack. Despite the heightened importance of snowpack, few studies have considered the representation of uncertainty in coupled snowmelt/hydrologic conceptual models. Uncertainty estimation provides a direct interpretation of...
Uncertain programming models for portfolio selection with uncertain returns
NASA Astrophysics Data System (ADS)
Zhang, Bo; Peng, Jin; Li, Shengguo
2015-10-01
In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data, but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed in the paper to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.
Dudaniec, Rachael Y; Worthington Wilmer, Jessica; Hanson, Jeffrey O; Warren, Matthew; Bell, Sarah; Rhodes, Jonathan R
2016-01-01
Landscape genetics lacks explicit methods for dealing with the uncertainty in landscape resistance estimation, which is particularly problematic when sample sizes of individuals are small. Unless uncertainty can be quantified, valuable but small data sets may be rendered unusable for conservation purposes. We offer a method to quantify uncertainty in landscape resistance estimates using multimodel inference as an improvement over single model-based inference. We illustrate the approach empirically using co-occurring, woodland-preferring Australian marsupials within a common study area: two arboreal gliders (Petaurus breviceps, and Petaurus norfolcensis) and one ground-dwelling antechinus (Antechinus flavipes). First, we use maximum-likelihood and a bootstrap procedure to identify the best-supported isolation-by-resistance model out of 56 models defined by linear and non-linear resistance functions. We then quantify uncertainty in resistance estimates by examining parameter selection probabilities from the bootstrapped data. The selection probabilities provide estimates of uncertainty in the parameters that drive the relationships between landscape features and resistance. We then validate our method for quantifying uncertainty using simulated genetic and landscape data showing that for most parameter combinations it provides sensible estimates of uncertainty. We conclude that small data sets can be informative in landscape genetic analyses provided uncertainty can be explicitly quantified. Being explicit about uncertainty in landscape genetic models will make results more interpretable and useful for conservation decision-making, where dealing with uncertainty is critical. © 2015 John Wiley & Sons Ltd.
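A minimal sketch of the bootstrap multimodel idea: resample individuals, fit a small set of candidate resistance models to each resample, and tally how often each model is selected; the synthetic predictors, response, and AIC-based selection are illustrative assumptions, not the study's maximum-likelihood resistance functions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical data: genetic distances (response) and two candidate landscape predictors.
n = 80
forest_cover = rng.uniform(0, 1, n)
road_density = rng.uniform(0, 1, n)
gen_dist = 0.4 * forest_cover + rng.normal(0, 0.15, n)   # only forest cover truly matters

candidates = {
    "distance_only": np.ones((n, 1)),
    "forest": np.column_stack([np.ones(n), forest_cover]),
    "roads": np.column_stack([np.ones(n), road_density]),
    "forest+roads": np.column_stack([np.ones(n), forest_cover, road_density]),
}

counts = dict.fromkeys(candidates, 0)
for _ in range(1000):
    idx = rng.integers(0, n, n)                           # bootstrap resample of individuals
    best, best_aic = None, np.inf
    for name, X in candidates.items():
        beta, *_ = np.linalg.lstsq(X[idx], gen_dist[idx], rcond=None)
        rss = np.sum((gen_dist[idx] - X[idx] @ beta)**2)
        aic = n * np.log(rss / n) + 2 * X.shape[1]
        if aic < best_aic:
            best, best_aic = name, aic
    counts[best] += 1

# Selection probabilities quantify uncertainty in which landscape features drive resistance.
print({k: v / 1000 for k, v in counts.items()})
```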
Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection
Susanna L. Melson; Mark E. Harmon; Jeremy S. Fried; James B. Domingo
2011-01-01
Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH)....
Model Selection for Monitoring CO2 Plume during Sequestration
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-12-31
The model selection method developed as part of this project mainly includes four steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models, (2) model clustering using multidimensional scaling coupled with k-means clustering, (3) model selection using Bayes' rule in the reduced model space, (4) model expansion using iterative resampling of the posterior models. The fourth step expresses one of the advantages of the method: it provides a built-in means of quantifying the uncertainty in predictions made with the selected models. In our application to plume monitoring, by expanding the posterior space of models, the final ensemble of representations of the geological model can be used to assess the uncertainty in predicting the future displacement of the CO2 plume. The software implementation of this approach is attached here.
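A hedged sketch of steps (1)-(2), using scikit-learn: embed a distance matrix of proxy dynamic responses with multidimensional scaling and group the models with k-means. The proxy responses are synthetic placeholders; the Bayes' rule selection and resampling steps are only indicated in the comments.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)

# Hypothetical proxy responses: for each of 200 prior geologic models, a short vector of
# connectivity/dynamic characteristics (e.g. breakthrough times at monitoring locations).
proxy = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (100, 4))])

# Pairwise distances between models, embedded into 2-D with multidimensional scaling.
dist = np.linalg.norm(proxy[:, None, :] - proxy[None, :, :], axis=-1)
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

# k-means in the reduced space groups models with similar expected dynamic response.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# The cluster whose representative response best matches the observed injection data
# would then be retained as the posterior model set (Bayes' rule step, not shown here).
print(np.bincount(labels))
```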
Impact of petrophysical uncertainty on Bayesian hydrogeophysical inversion and model selection
NASA Astrophysics Data System (ADS)
Brunetti, Carlotta; Linde, Niklas
2018-01-01
Quantitative hydrogeophysical studies rely heavily on petrophysical relationships that link geophysical properties to hydrogeological properties and state variables. Coupled inversion studies are frequently based on the questionable assumption that these relationships are perfect (i.e., no scatter). Using synthetic examples and crosshole ground-penetrating radar (GPR) data from the South Oyster Bacterial Transport Site in Virginia, USA, we investigate the impact of spatially-correlated petrophysical uncertainty on inferred posterior porosity and hydraulic conductivity distributions and on Bayes factors used in Bayesian model selection. Our study shows that accounting for petrophysical uncertainty in the inversion (I) decreases bias of the inferred variance of hydrogeological subsurface properties, (II) provides more realistic uncertainty assessment and (III) reduces the overconfidence in the ability of geophysical data to falsify conceptual hydrogeological models.
Reusable launch vehicle model uncertainties impact analysis
NASA Astrophysics Data System (ADS)
Chen, Jiaye; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng
2018-03-01
A reusable launch vehicle (RLV) has the typical characteristics of a complex aerodynamic shape and propulsion system coupling, and its flight environment is highly complicated and intensely changeable. Its model therefore has large uncertainty, which makes the nominal system quite different from the real system. Consequently, studying the influence of the uncertainties on the stability of the control system is of great significance for controller design. In order to improve the performance of the RLV, this paper proposes an approach for analyzing the influence of the model uncertainties. For a typical RLV, the coupled dynamic and kinematic models are built. Then the different factors that cause uncertainties during building of the model are analyzed and summed up. After that, the model uncertainties are expressed according to an additive uncertainty model. The maximum singular values of the uncertainty matrix are chosen as the boundary model, and the norm of the uncertainty matrix is selected to show how much influence the uncertainty factors have on the stability of the control system. The simulation results illustrate that the inertial factors have the largest influence on the stability of the system, and it is necessary and important to take the model uncertainties into consideration before designing the controller of this kind of aircraft (like an RLV, etc.).
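A minimal numerical sketch of the additive uncertainty description and its singular-value bound, with placeholder matrices standing in for the nominal and perturbed system models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative additive uncertainty model: G = G_nominal + Delta, with Delta bounded
# by its largest singular value. The matrices below are placeholders, not RLV data.
G_nominal = rng.normal(size=(4, 4))
G_perturbed = G_nominal + 0.1 * rng.normal(size=(4, 4))   # e.g. perturbed inertial terms

Delta = G_perturbed - G_nominal                            # additive uncertainty
bound = np.linalg.svd(Delta, compute_uv=False)[0]          # maximum singular value
print("||Delta||_2 =", round(float(bound), 4))

# Comparing such bounds for perturbations of individual factor groups (inertia,
# aerodynamics, ...) indicates which factor dominates the uncertainty budget.
```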
NASA Astrophysics Data System (ADS)
Jough, Fooad Karimi Ghaleh; Şensoy, Serhan
2016-12-01
Different performance levels may be obtained for sidesway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment-resisting steel frames of various heights is discussed in detail. RTR uncertainty is addressed through incremental dynamic analysis (IDA), modelling uncertainties are considered through backbone curves and hysteresis loops of components, and cognitive uncertainty is represented by three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time-saving appeal. Analytical equations of the Response Surface Method are obtained from IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves with the various sources of uncertainties mentioned are derived through a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in fragility curves and the mean annual frequency.
UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E
A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasan, Sanjay
2014-09-30
In-depth understanding of the long-term fate of CO₂ in the subsurface requires study and analysis of the reservoir formation, the overlaying caprock formation, and adjacent faults. Because there is significant uncertainty in predicting the location and extent of geologic heterogeneity that can impact the future migration of CO₂ in the subsurface, there is a need to develop algorithms that can reliably quantify this uncertainty in plume migration. This project is focused on the development of a model selection algorithm that refines an initial suite of subsurface models representing the prior uncertainty to create a posterior set of subsurface models that reflect injection performance consistent with that observed. Such posterior models can be used to represent uncertainty in the future migration of the CO₂ plume. Because only injection data is required, the method provides a very inexpensive method to map the migration of the plume and the associated uncertainty in migration paths. The model selection method developed as part of this project mainly consists of assessing the connectivity/dynamic characteristics of a large prior ensemble of models, grouping the models on the basis of their expected dynamic response, selecting the subgroup of models that most closely yield dynamic response closest to the observed dynamic data, and finally quantifying the uncertainty in plume migration using the selected subset of models. The main accomplishment of the project is the development of a software module within the SGEMS earth modeling software package that implements the model selection methodology. This software module was subsequently applied to analyze CO₂ plume migration in two field projects – the In Salah CO₂ Injection project in Algeria and CO₂ injection into the Utsira formation in Norway. These applications of the software revealed that the proxies developed in this project for quickly assessing the dynamic characteristics of the reservoir were highly efficient and yielded accurate grouping of reservoir models. The plume migration paths probabilistically assessed by the method were confirmed by field observations and auxiliary data. The report also documents the application of the software to answer practical questions such as the optimum location of monitoring wells to reliably assess the migration of CO₂ plume, the effect of CO₂-rock interactions on plume migration and the ability to detect the plume under those conditions and the effect of a slow, unresolved leak on the predictions of plume migration.
Neudecker, D.; Talou, P.; Kawano, T.; ...
2015-08-01
We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The k_eff one-standard-deviation intervals overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knudsen, J.K.; Smith, C.L.
The steps involved in incorporating parameter uncertainty into the Nuclear Regulatory Commission (NRC) accident sequence precursor (ASP) models are covered in this paper. Three different uncertainty distributions (i.e., lognormal, beta, gamma) were evaluated to determine the most appropriate distribution. From the evaluation, it was determined that the lognormal distribution will be used for the ASP models' uncertainty parameters. Selection of the uncertainty parameters for the basic events is also discussed. This paper covers the process of determining uncertainty parameters for the supercomponent basic events (i.e., basic events that are comprised of more than one component, which can have more than one failure mode) that are utilized in the ASP models. Once this is completed, the ASP model is ready to be utilized to propagate parameter uncertainty for event assessments.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
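To make the final integration step concrete, the following minimal Python sketch (all biomass figures and the equal weights are hypothetical, not taken from the study) averages stock-status estimates across a suite of assessment fits and reports an uncertainty that combines within-model estimation error with between-model spread:

```python
import numpy as np

# Hypothetical spawning stock biomass (SSB) estimates and standard errors from a
# suite of assessment model fits that differ in growth, M, and recruitment assumptions.
ssb_estimates = np.array([52_000., 61_000., 48_500., 57_200.])   # tonnes
ssb_std_errors = np.array([4_000., 6_500., 3_800., 5_100.])      # estimation uncertainty
weights = np.full(len(ssb_estimates), 1.0 / len(ssb_estimates))  # simple (equal) averaging

# Model-averaged point estimate.
ssb_avg = np.sum(weights * ssb_estimates)

# Total variance = within-model (estimation) variance + between-model variance,
# so the reported uncertainty reflects both sources rather than a single "best" fit.
within_var = np.sum(weights * ssb_std_errors**2)
between_var = np.sum(weights * (ssb_estimates - ssb_avg)**2)
ssb_sd = np.sqrt(within_var + between_var)

print(f"Model-averaged SSB: {ssb_avg:.0f} +/- {ssb_sd:.0f} t")
```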
NASA Astrophysics Data System (ADS)
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai
2015-01-16
Understanding the dynamics of biological processes can be substantially supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way for determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions certain. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.
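As a rough illustration of the idea, the sketch below (hypothetical parameter names and prediction samples; the variance-based index is only a stand-in for the PLS index defined in the paper) computes a per-parameter contribution to prediction uncertainty from profile-likelihood prediction samples and summarizes the contributions with an entropy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical prediction samples: for each parameter, the model prediction of
# interest evaluated at points along that parameter's profile-likelihood path.
profile_predictions = {
    "k_on":  rng.normal(1.0, 0.30, size=50),   # wide spread -> large contribution
    "k_off": rng.normal(1.0, 0.05, size=50),
    "V_max": rng.normal(1.0, 0.15, size=50),
}

# Illustrative PLS-type index: spread of the prediction along each profile
# (the paper defines its own index; variance is used here only as a stand-in).
pls = {p: np.var(vals) for p, vals in profile_predictions.items()}

# Normalize contributions and summarize them with a Shannon entropy,
# analogous in spirit to the PLS entropy for several uncertain parameters.
total = sum(pls.values())
shares = {p: v / total for p, v in pls.items()}
pls_entropy = -sum(s * np.log(s) for s in shares.values() if s > 0)

print("prediction-uncertainty shares:", {p: round(s, 3) for p, s in shares.items()})
print("PLS entropy:", round(pls_entropy, 3))
```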
Climate Change Impact Study with CMIP5 and Comparison with CMIP3
NASA Astrophysics Data System (ADS)
Wang, J.; Yin, H.; Reyes, E.; Chung, F. I.
2016-12-01
One of the significant uncertainties in climate change impact studies is the selection of climate model projections, including the choice of greenhouse gas emission scenarios. With the new generation of climate model projections, CMIP5, coming into use, CCTAG selected 11 climate models and two RCPs (RCP4.5 and RCP8.5) for California. The previous DWR climate change study was based on 6 CMIP3 climate models and two emission scenarios (SRES A2 and B1), which were selected by CAT. It remains an open question how the selection of these climate model projections and emission scenarios affects the assessment of climate change impacts on the future water supply of California's CVP/SWP projects. This work runs the DWR water planning model CalSim with 44 CMIP5 and 12 CMIP3 climate model projections to investigate how sensitive the assessment of future water supply in the CVP/SWP region is to the selection of climate model projections. It was found that, on average, CMIP5 projects a wetting trend in Northern California by 2060 while CMIP3 projects a drying trend across all of California, and CMIP5 projects about half a degree more warming than CMIP3. As a result, Sacramento River rim inflow increases by 8% for CMIP5 and decreases by 3% for CMIP3. In spite of this difference in rim inflow, north-of-Delta carryover storage in 2060 is reduced both under CMIP5 (14%) and under CMIP3 (23%), and south Delta exports are reduced both for CMIP5 (8%) and for CMIP3 (15%). Thus, the climate change impact uncertainty caused by the selection of climate model projections (CMIP3 vs. CMIP5) is about 7% in terms of Delta exports and about 9% in terms of north-of-Delta carryover storage. This is larger than the uncertainty caused by the selection of sea level rise (zero vs. 1.5 ft SLR), which is about 5% in terms of Delta exports and about 4-5% in terms of north-of-Delta carryover storage.
Optimal Wind Power Uncertainty Intervals for Electricity Market Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ying; Zhou, Zhi; Botterud, Audun
It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how to best select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the model into a mixed-integer linear programming (MILP) problem without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method not only helps to balance the economics and reliability of power system scheduling, but also helps to stabilize energy prices in electricity market operation.
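The paper formulates a MILP; the simplified Python sketch below (synthetic forecast errors and made-up cost coefficients) conveys only the underlying idea of optimizing the uncertainty level rather than fixing it in advance, using a one-dimensional search over interval quantiles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical wind forecast error scenarios (MW) and cost assumptions.
errors = rng.normal(0.0, 50.0, size=5000)      # forecast error scenarios
reserve_cost = 10.0                            # $/MW of reserve committed (economics)
shortfall_penalty = 200.0                      # $/MW of error outside the interval (reliability)

def expected_cost(alpha):
    """Cost of hedging against the central (1 - 2*alpha) interval of the error distribution."""
    lo, hi = np.quantile(errors, [alpha, 1.0 - alpha])
    reserve = (hi - lo) * reserve_cost
    outside = np.maximum(errors - hi, 0.0) + np.maximum(lo - errors, 0.0)
    return reserve + shortfall_penalty * outside.mean()

# Grid search over the uncertainty level instead of fixing, e.g., a 95 % interval a priori.
alphas = np.linspace(0.001, 0.20, 200)
best = min(alphas, key=expected_cost)
print(f"optimal uncertainty level ~ {100 * (1 - 2 * best):.1f}% interval")
```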
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Mattie, Patrick D.; Bixler, Nathan E.
2014-02-01
This paper describes the knowledge advancements from the uncertainty analysis for the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout accident scenario at the Peach Bottom Atomic Power Station. This work assessed key MELCOR and MELCOR Accident Consequence Code System, Version 2 (MACCS2) modeling uncertainties in an integrated fashion to quantify the relative importance of each uncertain input on potential accident progression, radiological releases, and off-site consequences. This quantitative uncertainty analysis provides measures of the effects on consequences of each of the selected uncertain parameters, both individually and in interaction with other parameters. The results measure the model response (e.g., variance in the output) to uncertainty in the selected input. Investigation into the important uncertain parameters in turn yields insights into important phenomena for accident progression and off-site consequences. This uncertainty analysis confirmed the known importance of some parameters, such as the failure rate of the Safety Relief Valve in accident progression modeling and the dry deposition velocity in off-site consequence modeling. The analysis also revealed some new insights, such as the dependent effect of cesium chemical form for different accident progressions.
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
Guilhaumon, François; Gimenez, Olivier; Gaston, Kevin J.; Mouillot, David
2008-01-01
Species-area relationships (SARs) are fundamental to the study of key and high-profile issues in conservation biology and are particularly widely used in establishing the broad patterns of biodiversity that underpin approaches to determining priority areas for biological conservation. Classically, the SAR has been argued in general to conform to a power-law relationship, and this form has been widely assumed in most applications in the field of conservation biology. Here, using nonlinear regressions within an information theoretical model selection framework, we included uncertainty regarding both model selection and parameter estimation in SAR modeling and conducted a global-scale analysis of the form of SARs for vascular plants and major vertebrate groups across 792 terrestrial ecoregions representing almost 97% of Earth's inhabited land. The results revealed a high level of uncertainty in model selection across biomes and taxa, and that the power-law model is clearly the most appropriate in only a minority of cases. Incorporating this uncertainty into a hotspots analysis using multimodel SARs led to the identification of a dramatically different set of global richness hotspots than when the power-law SAR was assumed. Our findings suggest that the results of analyses that assume a power-law model may be at severe odds with real ecological patterns, raising significant concerns for conservation priority-setting schemes and biogeographical studies. PMID:18832179
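A minimal sketch of the multimodel ingredient, with invented richness data and only two candidate SAR forms (the study fits a larger candidate set), is given below; Akaike weights quantify the model-selection uncertainty that the authors propagate into their hotspot analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ecoregion data: area (km^2) and species richness.
area = np.array([10., 50., 200., 1000., 5000., 20000., 80000.])
richness = np.array([12., 30., 55., 110., 190., 340., 560.])

def power_law(a, c, z):         # S = c * A^z  (classical SAR form)
    return c * a**z

def logarithmic(a, c, z):       # S = c + z * log(A)  (one alternative form)
    return c + z * np.log(a)

def aic(model, p0):
    popt, _ = curve_fit(model, area, richness, p0=p0, maxfev=10000)
    rss = np.sum((richness - model(area, *popt))**2)
    n, k = len(area), len(popt) + 1          # +1 for the residual variance
    return n * np.log(rss / n) + 2 * k

aics = {"power": aic(power_law, [1.0, 0.25]), "logarithmic": aic(logarithmic, [1.0, 50.0])}

# Akaike weights express model-selection uncertainty across the candidate SAR forms.
delta = {m: a - min(aics.values()) for m, a in aics.items()}
weights = {m: np.exp(-0.5 * d) for m, d in delta.items()}
total = sum(weights.values())
print({m: round(w / total, 3) for m, w in weights.items()})
```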
Prediction uncertainty and optimal experimental design for learning dynamical systems.
Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P
2016-06-01
Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
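A toy version of the prediction-deviation search might look like the following (exponential-decay surrogate model, synthetic data, and an ad hoc penalty formulation; the paper's actual optimization problem is formulated differently):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Toy dynamical-system surrogate: exponential decay y(t) = a * exp(-b * t).
t_obs = np.array([0.0, 1.0, 2.0])
y_obs = 2.0 * np.exp(-0.7 * t_obs) + rng.normal(0, 0.05, t_obs.size)
t_new = 6.0                                   # unobserved condition of interest

def predict(theta, t):
    a, b = theta
    return a * np.exp(-b * t)

def sse(theta):
    return np.sum((predict(theta, t_obs) - y_obs)**2)

tol = 2.0 * sse(minimize(sse, x0=[1.0, 1.0]).x)   # "good fit" threshold (an assumption)

def objective(x):
    th1, th2 = x[:2], x[2:]
    deviation = abs(predict(th1, t_new) - predict(th2, t_new))
    penalty = 1e3 * (max(sse(th1) - tol, 0.0) + max(sse(th2) - tol, 0.0))
    return -deviation + penalty               # maximize deviation while both models fit well

res = minimize(objective, x0=[1.5, 0.5, 2.5, 0.9], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
th1, th2 = res.x[:2], res.x[2:]
print("prediction deviation at t=6:", abs(predict(th1, t_new) - predict(th2, t_new)))
```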
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
Two-Stage Modeling of Formaldehyde-Induced Tumor Incidence in the Rat—analysis of Uncertainties
This work extends the 2-stage cancer modeling of tumor incidence in formaldehyde-exposed rats carried out at the CIIT Centers for Health Research. We modify key assumptions, evaluate the effect of selected uncertainties, and develop confidence bounds on parameter estimates. Th...
NASA Astrophysics Data System (ADS)
Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.
2013-12-01
The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to what degree the posterior mean outperforms the prior mean and all individual posterior models, how informative the data types were for reducing prediction uncertainty of evapotranspiration and deep drainage, and how well the model structure can be identified based on the different data types and subsets. We further analyze the impact of measurement uncertainty and systematic model errors on the effective sample size of the BF and the resulting model weights.
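The BMA bookkeeping behind such an analysis can be sketched in a few lines of Python; all likelihood values, model outputs and the uniform prior below are hypothetical placeholders, not results from the study:

```python
import numpy as np

# Hypothetical marginal (integrated) likelihoods of four crop models, e.g. obtained
# by averaging the particle weights of a Bootstrap Filter over the conditioning data.
models = ["CERES", "SUCROS", "GECROS", "SPASS"]
log_marg_lik = np.array([-120.4, -118.9, -123.1, -119.6])
prior_w = np.full(4, 0.25)

# Posterior model weights (BMA): prior x marginal likelihood, normalized.
w = prior_w * np.exp(log_marg_lik - log_marg_lik.max())
w /= w.sum()

# Hypothetical posterior means / variances of evapotranspiration from each model.
et_mean = np.array([3.1, 3.4, 2.8, 3.3])     # mm/day
et_var = np.array([0.10, 0.08, 0.15, 0.09])

# BMA predictive mean and variance (within-model + between-model components).
bma_mean = np.sum(w * et_mean)
bma_var = np.sum(w * (et_var + (et_mean - bma_mean)**2))
print(dict(zip(models, np.round(w, 3))), round(bma_mean, 2), round(bma_var, 3))
```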
Trends and uncertainties in budburst projections of Norway spruce in Northern Europe.
Olsson, Cecilia; Olin, Stefan; Lindström, Johan; Jönsson, Anna Maria
2017-12-01
Budburst is regulated by temperature conditions, and a warming climate is associated with earlier budburst. A range of phenology models has been developed to assess climate change effects, and they tend to produce different results. This is mainly caused by different model representations of tree physiology processes, selection of observational data for model parameterization, and selection of climate model data to generate future projections. In this study, we applied (i) Bayesian inference to estimate model parameter values to address uncertainties associated with selection of observational data, (ii) selection of climate model data representative of a larger dataset, and (iii) ensemble modeling over multiple initial conditions, model classes, model parameterizations, and boundary conditions to generate future projections and uncertainty estimates. The ensemble projection indicated that the budburst of Norway spruce in northern Europe will on average take place 10.2 ± 3.7 days earlier in 2051-2080 than in 1971-2000, given climate conditions corresponding to RCP 8.5. Three provenances were assessed separately (one early and two late), and the projections indicated that the relationship among provenances will also hold in a warmer climate. Structurally complex models were more likely to fail in predicting budburst for some combinations of site and year than simple models. However, they contributed to the overall picture of current understanding of climate impacts on tree phenology by capturing additional aspects of temperature response, for example, chilling. Model parameterizations based on single sites were more likely to result in model failure than parameterizations based on multiple sites, highlighting that the model parameterization is sensitive to initial conditions and may not perform well under other climate conditions, whether the change is due to a shift in space or over time. By addressing a range of uncertainties, this study showed that ensemble modeling provides a more robust impact assessment than would a single phenology model run.
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Harp, D.
2010-12-01
The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
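A minimal sketch of the coupled global-local strategy mentioned above, written in Python rather than in MADS itself (toy calibration problem; the swarm parameters are arbitrary), first explores with a small particle swarm and then refines the best candidate with Levenberg-Marquardt:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(42)

# Toy calibration problem: fit (a, b) of y = a * exp(-b * t) to noisy observations.
t = np.linspace(0, 5, 30)
y_obs = 4.0 * np.exp(-1.3 * t) + rng.normal(0, 0.05, t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - y_obs

def sse(p):
    return np.sum(residuals(p)**2)

# --- Stage 1: a minimal particle swarm for global exploration -------------------
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 5.0])
n_particles, n_iter = 30, 100
x = rng.uniform(lo, hi, size=(n_particles, 2))
v = np.zeros_like(x)
p_best, p_val = x.copy(), np.array([sse(p) for p in x])
g_best = p_best[p_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x = np.clip(x + v, lo, hi)
    val = np.array([sse(p) for p in x])
    improved = val < p_val
    p_best[improved], p_val[improved] = x[improved], val[improved]
    g_best = p_best[p_val.argmin()].copy()

# --- Stage 2: gradient-based Levenberg-Marquardt refinement from the swarm optimum
fit = least_squares(residuals, g_best, method="lm")
print("PSO estimate:", g_best, "-> LM refined:", fit.x)
```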
Analyzing The Uncertainty Of Diameter Growth Model Predictions
Ronald E. McRoberts; Veronica C. Lessard; Margaret R. Holdaway
1999-01-01
The North Central Research Station of the USDA Forest Service is developing a new set of individual tree, diameter growth models to be used as a component of an annual forest inventory system. The criterion for selection of predictor variables for these models is the uncertainty in 5-, 10-, and 20-year diameter growth predictions estimated using Monte Carlo simulations...
NASA Astrophysics Data System (ADS)
Pakyuz-Charrier, Evren; Lindsay, Mark; Ogarko, Vitaliy; Giraud, Jeremie; Jessell, Mark
2018-04-01
Three-dimensional (3-D) geological structural modeling aims to determine geological information in a 3-D space using structural data (foliations and interfaces) and topological rules as inputs. This is necessary in any project in which the properties of the subsurface matter; they express our understanding of geometries in depth. For that reason, 3-D geological models have a wide range of practical applications including but not restricted to civil engineering, the oil and gas industry, the mining industry, and water management. These models, however, are fraught with uncertainties originating from the inherent flaws of the modeling engines (working hypotheses, interpolator's parameterization) and the inherent lack of knowledge in areas where there are no observations combined with input uncertainty (observational, conceptual and technical errors). Because 3-D geological models are often used for impactful decision-making, it is critical that all 3-D geological models provide accurate estimates of uncertainty. This paper focuses on the effect of structural input data measurement uncertainty propagation in implicit 3-D geological modeling. This aim is achieved using Monte Carlo simulation for uncertainty estimation (MCUE), a stochastic method which samples from predefined disturbance probability distributions that represent the uncertainty of the original input data set. MCUE is used to produce hundreds to thousands of altered unique data sets. The altered data sets are used as inputs to produce a range of plausible 3-D models. The plausible models are then combined into a single probabilistic model as a means to propagate uncertainty from the input data to the final model. In this paper, several improved methods for MCUE are proposed. The methods pertain to distribution selection for input uncertainty, sample analysis and statistical consistency of the sampled distribution. Pole vector sampling is proposed as a more rigorous alternative than dip vector sampling for planar features and the use of a Bayesian approach to disturbance distribution parameterization is suggested. The influence of incorrect disturbance distributions is discussed and propositions are made and evaluated on synthetic and realistic cases to address the issues identified. The distribution of the errors of the observed data (i.e., scedasticity) is shown to affect the quality of prior distributions for MCUE. Results demonstrate that the proposed workflows improve the reliability of uncertainty estimation and diminish the occurrence of artifacts.
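A highly simplified sketch of the MCUE disturbance step is given below (hypothetical orientation data, independent normal perturbations of dip and dip direction; the paper itself argues for the more rigorous pole-vector sampling):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical structural input data: dip and dip direction (degrees) of foliations,
# each with an assumed measurement standard deviation (the disturbance distribution).
dips = np.array([35.0, 42.0, 50.0])
dip_dirs = np.array([120.0, 130.0, 118.0])
sigma_dip, sigma_dir = 3.0, 5.0

n_realizations = 1000
altered_datasets = []
for _ in range(n_realizations):
    # Simplified MCUE disturbance step: independent normal perturbations of the angles.
    # (The paper argues that sampling pole vectors, e.g. with a von Mises-Fisher
    #  distribution, is more rigorous than perturbing dip angles directly.)
    d = dips + rng.normal(0.0, sigma_dip, dips.size)
    dd = (dip_dirs + rng.normal(0.0, sigma_dir, dip_dirs.size)) % 360.0
    altered_datasets.append((np.clip(d, 0.0, 90.0), dd))

# Each altered data set would be passed to the implicit modeling engine to build one
# plausible 3-D model; the ensemble is then merged into a probabilistic model.
print(len(altered_datasets), "perturbed input data sets generated")
```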
FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.
Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.
1985-01-01
The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.
Why Quantify Uncertainty in Ecosystem Studies: Obligation versus Discovery Tool?
NASA Astrophysics Data System (ADS)
Harmon, M. E.
2016-12-01
There are multiple motivations for quantifying uncertainty in ecosystem studies. One is as an obligation; the other is as a tool useful in moving ecosystem science toward discovery. While reporting uncertainty should become a routine expectation, a more convincing motivation involves discovery. By clarifying what is known and to what degree it is known, uncertainty analyses can point the way toward improvements in measurements, sampling designs, and models. While some of these improvements (e.g., better sampling designs) may lead to incremental gains, those involving models (particularly model selection) may require large gains in knowledge. To be fully harnessed as a discovery tool, attitudes toward uncertainty may have to change: rather than viewing uncertainty as a negative assessment of what was done, it should be viewed as a positive, helpful assessment of what remains to be done.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001
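The core idea, treating points on the ℓ1 path as candidate models and averaging over them, can be sketched as follows (synthetic data; BIC-based weights are used here as a simple surrogate for the MC3 posterior weights developed in the paper):

```python
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)

# Synthetic regression data with a few true signals among many candidate variables.
n, p = 60, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.normal(0, 0.5, n)

# Treat the l1 regularization path as the model space: each point on the path
# defines a candidate variable subset.
alphas, coefs, _ = lasso_path(X, y, n_alphas=30)

bics, estimates = [], []
for j in range(coefs.shape[1]):
    support = np.flatnonzero(coefs[:, j])
    if support.size == 0 or support.size >= n:
        continue
    # Refit ordinary least squares on the selected subset and score it with BIC.
    b, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    rss = np.sum((y - X[:, support] @ b)**2)
    bics.append(n * np.log(rss / n) + support.size * np.log(n))
    full_b = np.zeros(p); full_b[support] = b
    estimates.append(full_b)

# Approximate posterior model weights from BIC and model-average the coefficients,
# rather than committing to a single point on the path.
bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min())); w /= w.sum()
beta_avg = np.average(np.array(estimates), axis=0, weights=w)
print("model-averaged coefficients (first 5):", np.round(beta_avg[:5], 2))
```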
Application of Bayesian model averaging to measurements of the primordial power spectrum
NASA Astrophysics Data System (ADS)
Parkinson, David; Liddle, Andrew R.
2010-11-01
Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
Mohsenizadeh, Daniel N; Dehghannasiri, Roozbeh; Dougherty, Edward R
2018-01-01
In systems biology, network models are often used to study interactions among cellular components, a salient aim being to develop drugs and therapeutic mechanisms to change the dynamical behavior of the network to avoid undesirable phenotypes. Owing to limited knowledge, model uncertainty is commonplace and network dynamics can be updated in different ways, thereby giving multiple dynamic trajectories, that is, dynamics uncertainty. In this manuscript, we propose an experimental design method that can effectively reduce the dynamics uncertainty and improve performance in an interaction-based network. Both dynamics uncertainty and experimental error are quantified with respect to the modeling objective, herein, therapeutic intervention. The aim of experimental design is to select among a set of candidate experiments the experiment whose outcome, when applied to the network model, maximally reduces the dynamics uncertainty pertinent to the intervention objective.
Quantitative Rheological Model Selection
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2014-11-01
The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a Multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model compared with power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
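The basic BOOT idea, repeating model selection inside every bootstrap resample so that selection uncertainty enters the reported error, can be sketched as below (synthetic data and a two-model candidate set; the double BOOT extension adds a second bootstrap layer not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic daily data: mortality regressed on PM and a confounder (temperature),
# with candidate models differing in whether temperature enters linearly or quadratically.
n = 500
pm = rng.gamma(4.0, 5.0, n)
temp = rng.normal(20.0, 7.0, n)
y = 50 + 0.08 * pm + 0.3 * temp + 0.02 * (temp - 20)**2 + rng.normal(0, 3, n)

def design(pm_, temp_, quadratic):
    cols = [np.ones_like(pm_), pm_, temp_]
    if quadratic:
        cols.append((temp_ - 20)**2)
    return np.column_stack(cols)

def fit_aic(Xd, yd):
    b, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    rss = np.sum((yd - Xd @ b)**2)
    nd, k = len(yd), Xd.shape[1] + 1
    return b[1], nd * np.log(rss / nd) + 2 * k    # PM coefficient and AIC

# Bootstrap model averaging: model selection is repeated inside every resample,
# so its uncertainty propagates into the averaged PM effect estimate.
pm_effects = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    fits = [fit_aic(design(pm[idx], temp[idx], q), y[idx]) for q in (False, True)]
    pm_effects.append(min(fits, key=lambda f: f[1])[0])

print(f"PM effect: {np.mean(pm_effects):.3f} (bootstrap SD {np.std(pm_effects):.3f})")
```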
Kim, Steven B; Kodell, Ralph L; Moon, Hojin
2014-03-01
In chemical and microbial risk assessments, risk assessors fit dose-response models to high-dose data and extrapolate downward to risk levels in the range of 1-10%. Although multiple dose-response models may be able to fit the data adequately in the experimental range, the estimated effective dose (ED) corresponding to an extremely small risk can be substantially different from model to model. In this respect, model averaging (MA) provides more robustness than a single dose-response model in the point and interval estimation of an ED. In MA, accounting for both data uncertainty and model uncertainty is crucial, but addressing model uncertainty is not achieved simply by increasing the number of models in a model space. A plausible set of models for MA can be characterized by goodness of fit and diversity surrounding the truth. We propose a diversity index (DI) to balance between these two characteristics in model space selection. It addresses a collective property of a model space rather than individual performance of each model. Tuning parameters in the DI control the size of the model space for MA. © 2013 Society for Risk Analysis.
Mean-variance model for portfolio optimization with background risk based on uncertainty theory
NASA Astrophysics Data System (ADS)
Zhai, Jia; Bai, Manying
2018-04-01
The aim of this paper is to develop a mean-variance model for portfolio optimization considering background risk, liquidity and transaction costs based on uncertainty theory. In the portfolio selection problem, the returns of securities and the liquidity of assets are treated as uncertain variables because of unexpected incidents or a lack of historical data, which are common in economic and social environments. We provide crisp forms of the model and a hybrid intelligent algorithm to solve it. Under a mean-variance framework, we analyze the portfolio frontier characteristics considering independently additive background risk. In addition, we discuss some effects of background risk and liquidity constraints on portfolio selection. Finally, we demonstrate the proposed models by numerical simulations.
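A crisp, classical stand-in for the background-risk idea is sketched below (illustrative returns and covariances; uncertain variables in the sense of uncertainty theory are replaced by ordinary probabilistic quantities):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical security data: expected returns and covariance of three assets.
mu = np.array([0.06, 0.10, 0.14])
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.030, 0.004],
                [0.001, 0.004, 0.060]])
bg_var = 0.005          # variance of an independently additive background risk (e.g. labor income)
target = 0.09           # required expected portfolio return

def total_variance(w):
    # Background risk is independent of the securities, so its variance simply adds.
    return w @ cov @ w + bg_var

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "ineq", "fun": lambda w: w @ mu - target})
res = minimize(total_variance, x0=np.full(3, 1/3), bounds=[(0, 1)] * 3, constraints=cons)
print("weights:", np.round(res.x, 3), " total risk (sd):", round(np.sqrt(res.fun), 4))
```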
Mitigating Provider Uncertainty in Service Provision Contracts
NASA Astrophysics Data System (ADS)
Smith, Chris; van Moorsel, Aad
Uncertainty is an inherent property of open, distributed and multiparty systems. The viability of the mutually beneficial relationships which motivate these systems relies on rational decision-making by each constituent party under uncertainty. Service provision in distributed systems is one such relationship. Uncertainty is experienced by the service provider in his ability to deliver a service with selected quality level guarantees due to inherent non-determinism, such as load fluctuations and hardware failures. Statistical estimators utilized to model this non-determinism introduce additional uncertainty through sampling error. Inability of the provider to accurately model and analyze uncertainty in the quality level guarantees can result in the formation of sub-optimal service provision contracts. Emblematic consequences include loss of revenue, inefficient resource utilization and erosion of reputation and consumer trust. We propose a utility model for contract-based service provision to provide a systematic approach to optimal service provision contract formation under uncertainty. Performance prediction methods to enable the derivation of statistical estimators for quality level are introduced, with analysis of their resultant accuracy and cost.
Decay heat uncertainty for BWR used fuel due to modeling and nuclear data uncertainties
Ilas, Germina; Liljenfeldt, Henrik
2017-05-19
Characterization of the energy released from radionuclide decay in nuclear fuel discharged from reactors is essential for the design, safety, and licensing analyses of used nuclear fuel storage, transportation, and repository systems. There are a limited number of decay heat measurements available for commercial used fuel applications. Because decay heat measurements can be expensive or impractical for covering the multitude of existing fuel designs, operating conditions, and specific application purposes, decay heat estimation relies heavily on computer code prediction. Uncertainty evaluation for calculated decay heat is an important aspect when assessing code prediction and a key factor supporting decision making for used fuel applications. While previous studies have largely focused on uncertainties in code predictions due to nuclear data uncertainties, this study discusses uncertainties in calculated decay heat due to uncertainties in assembly modeling parameters as well as in nuclear data. Capabilities in the SCALE nuclear analysis code system were used to quantify the effect on calculated decay heat of uncertainties in nuclear data and selected manufacturing and operation parameters for a typical boiling water reactor (BWR) fuel assembly. Furthermore, the BWR fuel assembly used as the reference case for this study was selected from a set of assemblies for which high-quality decay heat measurements are available, to assess the significance of the results through comparison with calculated and measured decay heat data.
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for a universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes is also accommodated for. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
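A generic Python stand-in for this Monte Carlo workflow is sketched below (the 'spup' package itself is written in R, so none of its functions are used; the toy model and distributions are invented):

```python
import numpy as np
from scipy.stats import norm, qmc

# Generic Monte Carlo uncertainty propagation with Latin hypercube sampling
# (a Python stand-in for the workflow; the 'spup' package itself is written in R).
sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n=1000)                            # uniform [0,1)^2 LHS design

# Uncertain model inputs described by probability distributions.
slope = norm(loc=0.08, scale=0.01).ppf(u[:, 0])       # e.g. terrain slope
erodibility = norm(loc=0.30, scale=0.05).ppf(u[:, 1])

def soil_loss(slope, erodibility):
    """Toy environmental model; stands in for any model callable from R or externally."""
    return 100.0 * erodibility * slope**1.3

realizations = soil_loss(slope, erodibility)

# Summarize output uncertainty for communication to non-experts.
print("mean:", realizations.mean().round(2),
      "90% interval:", np.percentile(realizations, [5, 95]).round(2))
```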
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for a universal applicability and being able to deal with case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes is also accommodated for. For uncertainty propagation the package has implemented the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
Diagnosing Model Errors in Simulations of Solar Radiation on Inclined Surfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-01
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined PV panels. Following numerous studies comparing the performance of transposition models, this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty. Our results suggest that an isotropic transposition model developed by Badescu substantially underestimates diffuse plane-of-array (POA) irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for future development of physics-based transposition models.
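For reference, a classical isotropic-sky (Liu-Jordan type) transposition can be written as below; Badescu's model applies a different isotropic weighting, so this sketch only illustrates the model class being compared, with made-up irradiance inputs:

```python
import numpy as np

def poa_irradiance_isotropic(dni, dhi, ghi, zenith, azimuth, tilt, surf_azimuth, albedo=0.2):
    """Plane-of-array irradiance with the classical isotropic-sky (Liu-Jordan type)
    transposition; Badescu's model uses a different isotropic weighting, so this is
    only a representative sketch of the model class discussed in the paper."""
    z, a, t, sa = np.radians([zenith, azimuth, tilt, surf_azimuth])
    cos_aoi = max(np.cos(z) * np.cos(t) + np.sin(z) * np.sin(t) * np.cos(a - sa), 0.0)
    beam = dni * cos_aoi                                  # direct component on the tilted plane
    diffuse = dhi * (1.0 + np.cos(t)) / 2.0               # isotropic sky-diffuse transposition
    reflected = ghi * albedo * (1.0 - np.cos(t)) / 2.0    # ground-reflected, depends on albedo
    return beam + diffuse + reflected

# Example: a south-facing panel tilted at 30 degrees under clear-sky conditions.
print(round(poa_irradiance_isotropic(dni=800, dhi=100, ghi=600,
                                     zenith=40, azimuth=180, tilt=30, surf_azimuth=180), 1))
```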
Diagnosing Model Errors in Simulation of Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-11-21
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined PV panels. Following numerous studies comparing the performance of transposition models, this paper aims to understand the quantitative uncertainty in the state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models with one substantially underestimating the diffuse plane-of-array (POA) irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for future development of physics-based transposition models.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
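The four-step loop can be caricatured in a few lines (entirely invented costs and projected uncertainty reductions) to show how the net benefit of each sampling budget is compared:

```python
import numpy as np

# Toy illustration of the data-worth loop: larger sampling budgets are assumed to
# shrink predictive uncertainty (sigma), which relaxes the chance-constraint safety
# margin and lowers pumping cost; net benefit = cost saving - sampling cost.
budgets = np.array([0., 10., 20., 40., 80.])          # data collection cost (k$)
sigma = np.array([1.00, 0.80, 0.65, 0.50, 0.42])      # projected prediction std. dev. after sampling

def management_cost(s, reliability_z=1.645, unit_cost=300.0):
    # Chance-constrained design: cost grows with the safety margin z * sigma.
    return unit_cost * (1.0 + reliability_z * s)

base_cost = management_cost(sigma[0])
net_benefit = (base_cost - management_cost(sigma)) - budgets
best = np.argmax(net_benefit)
print(f"best alternative: budget {budgets[best]:.0f} k$, net benefit {net_benefit[best]:.0f} k$")
```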
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Elshall, A. S.; Hanor, J. S.
2012-12-01
Subsurface modeling is challenging because of many possible competing propositions for each uncertain model component. How can we judge that we are selecting the correct proposition for an uncertain model component out of numerous competing propositions? How can we bridge the gap between synthetic mental principles such as mathematical expressions on one hand, and empirical observation such as observation data on the other hand when uncertainty exists on both sides? In this study, we introduce hierarchical Bayesian model averaging (HBMA) as a multi-model (multi-proposition) framework to represent our current state of knowledge and decision for hydrogeological structure modeling. The HBMA framework allows for segregating and prioritizing different sources of uncertainty, and for comparative evaluation of competing propositions for each source of uncertainty. We applied the HBMA to a study of hydrostratigraphy and uncertainty propagation of the Southern Hills aquifer system in the Baton Rouge area, Louisiana. We used geophysical data for hydrogeological structure construction through the indicator hydrostratigraphy method and used lithologic data from drillers' logs for model structure calibration. However, due to uncertainty in model data, structure and parameters, multiple possible hydrostratigraphic models were produced and calibrated. The study considered four sources of uncertainties. To evaluate mathematical structure uncertainty, the study considered three different variogram models and two geological stationarity assumptions. With respect to geological structure uncertainty, the study considered two geological structures with respect to the Denham Springs-Scotlandville fault. With respect to data uncertainty, the study considered two calibration data sets. These four sources of uncertainty with their corresponding competing modeling propositions resulted in 24 calibrated models. The results showed that by segregating different sources of uncertainty, HBMA analysis provided insights on uncertainty priorities and propagation. In addition, it assisted in evaluating the relative importance of competing modeling propositions for each uncertain model component. By being able to dissect the uncertain model components and provide weighted representation of the competing propositions for each uncertain model component based on the background knowledge, the HBMA functions as an epistemic framework for advancing knowledge about the system under study.
NASA Astrophysics Data System (ADS)
Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong
2018-06-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to soil moisture content status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change relies strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); the empirical constant in the moisture response function (m) is the least important one. The vertical distribution of soil moisture, but not temperature, plays the predominant role in controlling nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers are faced with multiple ways of establishing nitrogen models.
Posada, David
2006-01-01
The ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for assessing model selection uncertainty, for model averaging, or for estimating the relative importance of model parameters. The server can be accessed at . PMID:16845102
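The information-criterion route that the server supports can be illustrated with a short sketch: AIC values computed from the candidate models' likelihood scores yield Akaike weights, which quantify model selection uncertainty and can serve as model-averaging weights. The log-likelihoods and parameter counts below are placeholders, not ModelTest output.

```python
import numpy as np

# Hypothetical maximized log-likelihoods and free-parameter counts for a few
# nucleotide substitution models (placeholder numbers, not ModelTest output).
candidates = {"JC": (-3100.2, 0), "HKY": (-3050.7, 4), "GTR+G": (-3041.9, 9)}

aic = {m: 2 * k - 2 * lnL for m, (lnL, k) in candidates.items()}
delta = {m: a - min(aic.values()) for m, a in aic.items()}
rel_lik = {m: np.exp(-0.5 * d) for m, d in delta.items()}
total = sum(rel_lik.values())
weights = {m: r / total for m, r in rel_lik.items()}   # Akaike weights

for m in candidates:
    print(f"{m:6s} AIC={aic[m]:9.1f}  dAIC={delta[m]:6.1f}  w={weights[m]:.3f}")
```

Akaike weights of this kind can then be used either to pick the single best model or to average parameter estimates across models, as described in the abstract.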
Selection of Representative Models for Decision Analysis Under Uncertainty
NASA Astrophysics Data System (ADS)
Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.
2016-03-01
The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, a mathematical function was first developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
Plurality of Type A evaluations of uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Pintar, Adam L.
2017-10-01
The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is a popular method for projecting the impact of climate change on ecosystems. SDMs are based on the niche of a species, so presence point data are essential for characterizing that niche. Running SDMs for plants requires particular consideration of the characteristics of vegetation. Normally, vegetation data over large areas are produced with remote sensing techniques. As a consequence, the exact locations of presence points carry high uncertainty, because presence data sets are selected from polygon and raster datasets. Thus, sampling methods for deriving vegetation presence data should be carefully selected. In this study, we used three different sampling methods for selecting vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty from modeling. We included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. As a result of this study, the uncertainties arising from presence data sampling methods and from the SDMs can be quantified.
NASA Astrophysics Data System (ADS)
Devendran, A. A.; Lakshmanan, G.
2014-11-01
Data quality for GIS processing and analysis is becoming an increasing concern due to the accelerated application of GIS technology to problem solving and decision making. Uncertainty in the geographic representation of the real world arises because these representations are incomplete. Identifying the sources of these uncertainties and the ways in which they operate in GIS-based representations is crucial for any spatial data representation and geospatial analysis, whatever the field of application. This paper reviews articles on the various components of spatial data quality and the uncertainties inherent in them, with special focus on two fields of application: urban simulation and hydrological modelling. Urban growth is a complicated process involving the spatio-temporal changes of all socio-economic and physical components at different scales. The Cellular Automata (CA) model is one such simulation model: it randomly selects potential cells for urbanisation, and transition rules evaluate the properties of each cell and its neighbours. Uncertainty arising from CA modelling is assessed mainly using sensitivity analysis, including the Monte Carlo simulation method. Likewise, the importance of hydrological uncertainty analysis has been emphasized in recent years, and there is an urgent need to incorporate uncertainty estimation into water resources assessment procedures. The Soil and Water Assessment Tool (SWAT) is a continuous-time watershed model for evaluating the impacts of land use management and climate on hydrology and water quality. Hydrological model uncertainties in SWAT are dealt with primarily by the Generalized Likelihood Uncertainty Estimation (GLUE) method.
Performance Analysis of Transposition Models Simulating Solar Radiation on Inclined Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Yu; Sengupta, Manajit
2016-06-02
Transposition models have been widely used in the solar energy industry to simulate solar radiation on inclined photovoltaic panels. Following numerous studies comparing the performance of transposition models, this work aims to understand the quantitative uncertainty in state-of-the-art transposition models and the sources leading to the uncertainty. Our results show significant differences between two highly used isotropic transposition models, with one substantially underestimating the diffuse plane-of-array irradiances when diffuse radiation is perfectly isotropic. In the empirical transposition models, the selection of the empirical coefficients and land surface albedo can both result in uncertainty in the output. This study can be used as a guide for the future development of physics-based transposition models and evaluations of system performance.
Uncertainty Quantification in High Throughput Screening ...
Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for the fitted model parameters.
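A minimal sketch of the bootstrap idea is shown below: a Hill-type concentration-response model is fit to synthetic data, residuals are resampled, and the refits give a confidence interval on the potency parameter. The data, noise level, and parameter names are invented for illustration and do not reproduce the ToxCast pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Hill-type concentration-response curve."""
    return top * conc**n / (ac50**n + conc**n)

rng = np.random.default_rng(2)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])            # uM, hypothetical
resp = hill(conc, 80, 5.0, 1.2) + rng.normal(0, 6, 7)      # noisy synthetic data

p0 = (resp.max(), np.median(conc), 1.0)
popt, _ = curve_fit(hill, conc, resp, p0=p0, bounds=(0, np.inf))
resid = resp - hill(conc, *popt)

ac50_boot = []
for _ in range(500):
    # Resample residuals and refit to propagate noise into the parameters.
    resp_b = hill(conc, *popt) + rng.choice(resid, size=resid.size, replace=True)
    try:
        p_b, _ = curve_fit(hill, conc, resp_b, p0=popt, bounds=(0, np.inf))
        ac50_boot.append(p_b[1])
    except RuntimeError:
        continue  # skip non-converged resamples

lo, hi = np.percentile(ac50_boot, [2.5, 97.5])
print(f"AC50 = {popt[1]:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```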
NASA Astrophysics Data System (ADS)
Karmalkar, A.
2017-12-01
Ensembles of dynamically downscaled climate change simulations are routinely used to capture uncertainty in projections at regional scales. I assess the reliability of two such ensembles for North America - NARCCAP and NA-CORDEX - by investigating the impact of model selection on representing uncertainty in regional projections, and the ability of the regional climate models (RCMs) to provide reliable information. These aspects - discussed for the six regions used in the US National Climate Assessment - provide an important perspective on the interpretation of downscaled results. I show that selecting general circulation models for downscaling based on their equilibrium climate sensitivities is a reasonable choice, but the six models chosen for NA-CORDEX do a poor job at representing uncertainty in winter temperature and precipitation projections in many parts of the eastern US, which leads to overconfident projections. The RCM performance is highly variable across models, regions, and seasons, and the ability of the RCMs to provide improved seasonal mean performance relative to their parent GCMs seems limited in both RCM ensembles. Additionally, the ability of the RCMs to simulate historical climates is not strongly related to their ability to simulate climate change across the ensemble. This finding suggests that models' historical performance is of limited use for constraining their projections. Given these challenges in dynamical downscaling, the RCM results should not be used in isolation. Information on how well the RCM ensembles represent known uncertainties in regional climate change projections, as discussed here, needs to be communicated clearly to inform management decisions.
A facility location model for municipal solid waste management system under uncertain environment.
Yadav, Vinay; Bhurjee, A K; Karmakar, Subhankar; Dikshit, A K
2017-12-15
In a municipal solid waste management system, decision makers have to develop insight into the constituent processes, namely waste generation, collection, transportation, processing, and disposal. Many parameters in this system (e.g., waste generation rate, operating costs of facilities, transportation cost, and revenues) are associated with uncertainties. Often, these parameter uncertainties must be modeled under data scarcity, where there is too little information to generate the probability distribution functions or membership functions required by stochastic or fuzzy mathematical programming, respectively, and only the extreme variations are known. Moreover, if uncertainties are ignored, problems such as insufficient capacities of waste management facilities or improper utilization of available funds may arise. To tackle these parameter uncertainties more efficiently, an algorithm based on interval analysis has been developed. This algorithm is applied to find optimal solutions for a facility location model, which is formulated to select the economically best locations of transfer stations in a hypothetical urban center. Transfer stations are an integral part of contemporary municipal solid waste management systems, and economic siting of transfer stations ensures the financial sustainability of the system. The model is written in the mathematical programming language AMPL with KNITRO as the solver. The developed model selects the five economically best locations out of ten potential locations, with an optimum overall cost of approximately [394,836, 757,440] Rs./day ([5906, 11,331] USD/day). Further, the need for uncertainty modeling is explained based on the results of a sensitivity analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
Predicting long-range transport: a systematic evaluation of two multimedia transport models.
Bennett, D H; Scheringer, M; McKone, T E; Hungerbühler, K
2001-03-15
The United Nations Environment Program has recently developed criteria to identify and restrict chemicals with a potential for persistence and long-range transport (persistent organic pollutants or POPs). There are many stakeholders involved, and the issues are not only scientific but also include social, economic, and political factors. This work focuses on one aspect of the POPs debate, the criteria for determining the potential for long-range transport (LRT). Our goal is to determine if current models are reliable enough to support decisions that classify a chemical based on the LRT potential. We examine the robustness of two multimedia fate models for determining the relative ranking and absolute spatial range of various chemicals in the environment. We also consider the effect of parameter uncertainties and the model uncertainty associated with the selection of an algorithm for gas-particle partitioning on the model results. Given the same chemical properties, both models give virtually the same ranking. However, when chemical parameter uncertainties and model uncertainties such as particle partitioning are considered, the spatial range distributions obtained for the individual chemicals overlap, preventing a distinct rank order. The absolute values obtained for the predicted spatial range or travel distance differ significantly between the two models for the uncertainties evaluated. We find that to evaluate a chemical when large and unresolved uncertainties exist, it is more informative to use two or more models and include multiple types of uncertainty. Model differences and uncertainties must be explicitly confronted to determine how the limitations of scientific knowledge impact predictions in the decision-making process.
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Flach, G. P.
2012-12-01
The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Baier-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate PET, ET, and I distributions that are well within the range of field measurements. We also show that selecting a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
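As a small illustration of the aleatory part of such a calculation, the sketch below propagates assumed input uncertainty through one of the listed formulas, the Hargreaves-Samani equation, by Monte Carlo sampling. The temperature statistics and extraterrestrial radiation value are placeholders, not SRS data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000

# Hypothetical monthly-mean inputs with assumed (aleatory) uncertainty.
t_mean = rng.normal(18.0, 1.5, n)                    # mean air temperature, deg C
t_range = rng.normal(10.0, 2.0, n).clip(min=0.5)     # Tmax - Tmin, deg C
ra = 12.0                                            # extraterrestrial radiation, mm/day equivalent

# Hargreaves-Samani potential evapotranspiration, mm/day.
pet = 0.0023 * ra * (t_mean + 17.8) * np.sqrt(t_range)

print(f"PET mean = {pet.mean():.2f} mm/day, "
      f"5-95% range = {np.percentile(pet, 5):.2f}-{np.percentile(pet, 95):.2f} mm/day")
```

Repeating this for each PET model and aggregating the resulting distributions (for example into a p-box) mirrors the multi-model aggregation step described above.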
Uncertainty in BMP evaluation and optimization for watershed management
NASA Astrophysics Data System (ADS)
Chaubey, I.; Cibin, R.; Sudheer, K.; Her, Y.
2012-12-01
Use of computer simulation models has increased substantially to support watershed management decisions and to develop strategies for water quality improvements. These models are often used to evaluate potential benefits of various best management practices (BMPs) for reducing losses of pollutants from source areas into receiving waterbodies. Similarly, use of simulation models in optimizing the selection and placement of best management practices under single (maximization of crop production or minimization of pollutant transport) and multiple objective functions has increased recently. One of the limitations of the currently available assessment and optimization approaches is that the BMP strategies are considered deterministic. Uncertainties in input data (e.g. measured precipitation, streamflow, sediment, nutrient and pesticide losses, and land use) and model parameters may result in considerable uncertainty in watershed response under various BMP options. We have developed and evaluated options to include uncertainty in BMP evaluation and optimization for watershed management. We have also applied these methods to evaluate uncertainty in ecosystem services from mixed land use watersheds. In this presentation, we will discuss methods to quantify uncertainties in BMP assessment and optimization solutions due to uncertainties in model inputs and parameters. We have used a watershed model (Soil and Water Assessment Tool or SWAT) to simulate the hydrology and water quality in a mixed land use watershed located in the Midwest USA. The SWAT model was also used to represent various BMPs in the watershed needed to improve water quality. SWAT model parameters, land use change parameters, and climate change parameters were considered uncertain. It was observed that model parameters, land use and climate changes resulted in considerable uncertainties in BMP performance in reducing P, N, and sediment loads. In addition, climate change scenarios also affected uncertainties in SWAT-simulated crop yields. Considerable uncertainties in the net cost and the water quality improvements resulted from uncertainties in land use, climate change, and model parameter values.
Predictive uncertainty in auditory sequence processing
Hansen, Niels Chr.; Pearce, Marcus T.
2014-01-01
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
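The entropy computation at the core of this approach can be sketched briefly: a first-order Markov (bigram) model estimated from a corpus gives a predictive distribution over the next event, and the Shannon entropy of that distribution measures prospective uncertainty. The toy corpus below is invented and far simpler than the variable-order model used in the study.

```python
import numpy as np
from collections import Counter, defaultdict

def bigram_model(sequences):
    """First-order Markov model: next-note counts given the current note."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predictive_entropy(counts, context):
    """Shannon entropy (bits) of the next-note distribution for a context."""
    p = np.array(list(counts[context].values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

# Toy melodic corpus encoded as MIDI-like pitch numbers (invented).
corpus = [[60, 62, 64, 62, 60, 62, 64, 65], [60, 62, 60, 64, 62, 60, 59, 60]]
model = bigram_model(corpus)
print(f"Predictive uncertainty after pitch 60: {predictive_entropy(model, 60):.2f} bits")
print(f"Predictive uncertainty after pitch 64: {predictive_entropy(model, 64):.2f} bits")
```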
Mesa-Frias, Marco; Chalabi, Zaid; Foss, Anna M
2014-01-01
Quantitative health impact assessment (HIA) is increasingly being used to assess the health impacts attributable to an environmental policy or intervention. As a consequence, there is a need to assess uncertainties in the assessments because of the uncertainty in the HIA models. In this paper, a framework is developed to quantify the uncertainty in the health impacts of environmental interventions and is applied to evaluate the impacts of poor housing ventilation. The paper describes the development of the framework through three steps: (i) selecting the relevant exposure metric and quantifying the evidence of potential health effects of the exposure; (ii) estimating the size of the population affected by the exposure and selecting the associated outcome measure; (iii) quantifying the health impact and its uncertainty. The framework introduces a novel application for the propagation of uncertainty in HIA, based on fuzzy set theory. Fuzzy sets are used to propagate parametric uncertainty in a non-probabilistic space and are applied to calculate the uncertainty in the morbidity burdens associated with three indoor ventilation exposure scenarios: poor, fair and adequate. The case-study example demonstrates how the framework can be used in practice, to quantify the uncertainty in health impact assessment where there is insufficient information to carry out a probabilistic uncertainty analysis. © 2013.
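A minimal sketch of the fuzzy propagation step is given below: triangular fuzzy numbers for the inputs are cut at a few alpha levels, and the resulting intervals are pushed through a simple monotone impact calculation. The exposure-response and population figures are placeholders, not the housing ventilation case-study values.

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (low, mode, high) at level alpha."""
    low, mode, high = tri
    return low + alpha * (mode - low), high - alpha * (high - mode)

# Hypothetical fuzzy inputs: relative risk per exposure scenario and exposed population.
rr = (1.05, 1.15, 1.30)                 # fuzzy relative risk
exposed_pop = (40_000, 50_000, 65_000)  # fuzzy number of exposed residents
baseline_rate = 0.02                    # cases per person-year (crisp value)

for alpha in (0.0, 0.5, 1.0):
    rr_lo, rr_hi = alpha_cut(rr, alpha)
    pop_lo, pop_hi = alpha_cut(exposed_pop, alpha)
    # Attributable cases = baseline * population * (RR - 1); monotone in both inputs.
    cases_lo = baseline_rate * pop_lo * (rr_lo - 1)
    cases_hi = baseline_rate * pop_hi * (rr_hi - 1)
    print(f"alpha={alpha:.1f}: attributable cases in [{cases_lo:.0f}, {cases_hi:.0f}]")
```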
The importance of hydrological uncertainty assessment methods in climate change impact studies
NASA Astrophysics Data System (ADS)
Honti, M.; Scheidegger, A.; Stamm, C.
2014-08-01
Climate change impact assessments have become more and more popular in hydrology since the middle 1980s with a recent boost after the publication of the IPCC AR4 report. From hundreds of impact studies a quasi-standard methodology has emerged, to a large extent shaped by the growing public demand for predicting how water resources management or flood protection should change in the coming decades. The "standard" workflow relies on a model cascade from global circulation model (GCM) predictions for selected IPCC scenarios to future catchment hydrology. Uncertainty is present at each level and propagates through the model cascade. There is an emerging consensus between many studies on the relative importance of the different uncertainty sources. The prevailing perception is that GCM uncertainty dominates hydrological impact studies. Our hypothesis was that the relative importance of climatic and hydrologic uncertainty is (among other factors) heavily influenced by the uncertainty assessment method. To test this we carried out a climate change impact assessment and estimated the relative importance of the uncertainty sources. The study was performed on two small catchments in the Swiss Plateau with a lumped conceptual rainfall runoff model. In the climatic part we applied the standard ensemble approach to quantify uncertainty but in hydrology we used formal Bayesian uncertainty assessment with two different likelihood functions. One was a time series error model that was able to deal with the complicated statistical properties of hydrological model residuals. The second was an approximate likelihood function for the flow quantiles. The results showed that the expected climatic impact on flow quantiles was small compared to prediction uncertainty. The choice of uncertainty assessment method actually determined what sources of uncertainty could be identified at all. This demonstrated that one could arrive at rather different conclusions about the causes behind predictive uncertainty for the same hydrological model and calibration data when considering different objective functions for calibration.
NASA Astrophysics Data System (ADS)
Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial
2015-08-01
A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.
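One component of such a framework, turning model evidence into posterior model plausibilities, can be sketched in a few lines; the log-evidence values below are placeholders rather than results from the paper.

```python
import numpy as np

# Hypothetical log-evidence values for three candidate coarse-grained models.
log_evidence = {"CG-1": -1520.3, "CG-2": -1512.8, "CG-3": -1514.1}
log_prior = {m: np.log(1.0 / len(log_evidence)) for m in log_evidence}  # equal priors

log_post = {m: log_evidence[m] + log_prior[m] for m in log_evidence}
shift = max(log_post.values())                    # subtract the max for numerical stability
unnorm = {m: np.exp(lp - shift) for m, lp in log_post.items()}
z = sum(unnorm.values())
plausibility = {m: p / z for m, p in unnorm.items()}

for m, p in sorted(plausibility.items(), key=lambda kv: -kv[1]):
    print(f"{m}: posterior plausibility = {p:.3f}")
```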
NASA Astrophysics Data System (ADS)
Feyen, Luc; Caers, Jef
2006-06-01
In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport predictions.
Heikkinen, Risto K; Bocedi, Greta; Kuussaari, Mikko; Heliölä, Janne; Leikola, Niko; Pöyry, Juha; Travis, Justin M J
2014-01-01
Dynamic models for range expansion provide a promising tool for assessing species' capacity to respond to climate change by shifting their ranges to new areas. However, these models include a number of uncertainties which may affect how successfully they can be applied to climate change oriented conservation planning. We used RangeShifter, a novel dynamic and individual-based modelling platform, to study two potential sources of such uncertainties: the selection of land cover data and the parameterization of key life-history traits. As an example, we modelled the range expansion dynamics of two butterfly species, one habitat specialist (Maniola jurtina) and one generalist (Issoria lathonia). Our results show that projections of total population size, number of occupied grid cells and the mean maximal latitudinal range shift were all clearly dependent on the choice made between using CORINE land cover data vs. using more detailed grassland data from three alternative national databases. Range expansion was also sensitive to the parameterization of the four considered life-history traits (magnitude and probability of long-distance dispersal events, population growth rate and carrying capacity), with carrying capacity and magnitude of long-distance dispersal showing the strongest effect. Our results highlight the sensitivity of dynamic species population models to the selection of existing land cover data and to uncertainty in the model parameters and indicate that these need to be carefully evaluated before the models are applied to conservation planning.
NASA Astrophysics Data System (ADS)
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data/information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. This methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) for reducing the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
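The linear estimate underlying this kind of data-worth analysis can be sketched as follows: given a prior parameter covariance, the sensitivities of candidate observations, and the sensitivity of the forecast of interest, first-order (FOSM) algebra gives the forecast variance before and after assimilating a candidate measurement campaign. All matrices below are random placeholders; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(4)
n_par = 8

# Prior parameter covariance (placeholder: independent parameter multipliers).
C = np.diag(rng.uniform(0.5, 2.0, n_par))

# Sensitivity of the forecast of interest to the parameters (placeholder row vector).
j_f = rng.normal(size=(1, n_par))

def posterior_forecast_var(J_obs, obs_var=0.01):
    """Linear (FOSM) forecast variance after assimilating observations with
    sensitivity matrix J_obs and independent noise variance obs_var."""
    R = obs_var * np.eye(J_obs.shape[0])
    gain = C @ J_obs.T @ np.linalg.inv(J_obs @ C @ J_obs.T + R)
    C_post = C - gain @ J_obs @ C
    return float(j_f @ C_post @ j_f.T)

prior_var = float(j_f @ C @ j_f.T)
# Two hypothetical candidate measurement campaigns (sensitivities are placeholders).
campaign_A = rng.normal(size=(3, n_par))
campaign_B = rng.normal(size=(3, n_par))

for name, J in [("A", campaign_A), ("B", campaign_B)]:
    print(f"Campaign {name}: forecast variance {prior_var:.2f} -> {posterior_forecast_var(J):.2f}")
```

Ranking campaigns by the resulting variance reduction is the basic operation that a simultaneous, multi-observation selection scheme repeats over candidate combinations.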
Assessment of parametric uncertainty for groundwater reactive transport modeling,
Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun
2014-01-01
The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework
NASA Astrophysics Data System (ADS)
Achieng, K. O.; Zhu, J.
2017-12-01
There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data. However, model selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - a two-parameter recursive digital filter, also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated by the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff when averaged using BMA, given the a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
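The baseflow separation step can be sketched directly, since the Eckhardt filter is a short recursive calculation; the streamflow series and parameter values below are placeholders rather than the USGS record used in the study.

```python
import numpy as np

def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.8):
    """Two-parameter recursive digital baseflow filter (Eckhardt form).
    q: total streamflow series; alpha: recession constant; bfi_max: maximum baseflow index."""
    b = np.zeros_like(q, dtype=float)
    b[0] = q[0] * bfi_max
    for t in range(1, len(q)):
        b[t] = ((1 - bfi_max) * alpha * b[t - 1] + (1 - alpha) * bfi_max * q[t]) \
               / (1 - alpha * bfi_max)
        b[t] = min(b[t], q[t])   # baseflow cannot exceed total streamflow
    return b

# Placeholder daily streamflow (m^3/s) standing in for a gauged record.
q = np.array([5, 5, 20, 15, 10, 8, 7, 6, 5.5, 5.2, 5.1, 5.0])
baseflow = eckhardt_baseflow(q)
surface_runoff = q - baseflow
print(np.round(surface_runoff, 2))
```

The resulting surface-runoff series is the kind of quantity used here as the a priori runoff against which the RCM-driven simulations are averaged.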
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, J.; Moteabbed, M.; Paganetti, H., E-mail: hpaganetti@mgh.harvard.edu
2015-01-15
Purpose: Theoretical dose–response models offer the possibility to assess second cancer induction risks after external beam therapy. The parameters used in these models are determined with limited data from epidemiological studies. Risk estimations are thus associated with considerable uncertainties. This study aims at illustrating uncertainties when predicting the risk for organ-specific second cancers in the primary radiation field, illustrated here with selected treatment plans for brain cancer patients. Methods: A widely used risk model was considered in this study. The uncertainties of the model parameters were estimated with reported data of second cancer incidences for various organs. Standard error propagation was then applied to assess the uncertainty in the risk model. Next, second cancer risks of five pediatric patients treated for cancer in the head and neck regions were calculated. For each case, treatment plans for proton and photon therapy were designed to estimate the uncertainties (a) in the lifetime attributable risk (LAR) for a given treatment modality and (b) when comparing risks of two different treatment modalities. Results: Uncertainties in excess of 100% of the risk were found for almost all organs considered. When applied to treatment plans, the calculated LAR values have uncertainties of the same magnitude. A comparison between cancer risks of different treatment modalities, however, does allow statistically significant conclusions. In the studied cases, the patient-averaged LAR ratio of proton and photon treatments was 0.35, 0.56, and 0.59 for brain carcinoma, brain sarcoma, and bone sarcoma, respectively. Their corresponding uncertainties were estimated to be potentially below 5%, depending on uncertainties in dosimetry. Conclusions: The uncertainty in the dose–response curve in cancer risk models makes it currently impractical to predict the risk for an individual external beam treatment. On the other hand, the ratio of absolute risks between two modalities is less sensitive to the uncertainties in the risk model and can provide statistically significant estimates.
Gannon, Jill J.; Moore, Clinton T.; Shaffer, Terry L.; Flanders-Wanner, Bridgette
2011-01-01
Much of the native prairie managed by the U.S. Fish and Wildlife Service (Service) in the Prairie Pothole Region (PPR) is extensively invaded by the introduced cool-season grasses smooth brome (Bromus inermis) and Kentucky bluegrass (Poa pratensis). The central challenge to managers is selecting appropriate management actions in the face of biological and environmental uncertainties. We describe the technical components of a USGS management project, and explain how the components integrate and inform each other, how data feedback from individual cooperators serves to reduce uncertainty across the whole region, and how a successful adaptive management project is coordinated and maintained on a large scale. In partnership with the Service, the U.S. Geological Survey is developing an adaptive decision support framework to assist managers in selecting management actions under uncertainty and maximizing learning from management outcomes. The framework is built around practical constraints faced by refuge managers and includes identification of the management objective and strategies, analysis of uncertainty and construction of competing decision models, monitoring, and mechanisms for model feedback and decision selection. Nineteen Service field stations, spanning four states of the PPR, are participating in the project. They share a common management objective, available management strategies, and biological uncertainties. While the scope is broad, the project interfaces with individual land managers who provide refuge-specific information and receive updated decision guidance that incorporates understanding gained from the collective experience of all cooperators.
Economic and environmental costs of regulatory uncertainty for coal-fired power plants.
Patiño-Echeverri, Dalia; Fischbeck, Paul; Kriegler, Elmar
2009-02-01
Uncertainty about the extent and timing of CO2 emissions regulations for the electricity-generating sector exacerbates the difficulty of selecting investment strategies for retrofitting or alternatively replacing existing coal-fired power plants. This may result in inefficient investments imposing economic and environmental costs on society. In this paper, we construct a multiperiod decision model with an embedded multistage stochastic dynamic program minimizing the expected total costs of plant operation, installations, and pollution allowances. We use the model to forecast optimal sequential investment decisions of a power plant operator with and without uncertainty about future CO2 allowance prices. The comparison of the two cases demonstrates that uncertainty about future CO2 emissions regulations might cause significant economic costs and higher air emissions.
Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry
2011-03-01
For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.
Equifinality and process-based modelling
NASA Astrophysics Data System (ADS)
Khatami, S.; Peel, M. C.; Peterson, T. J.; Western, A. W.
2017-12-01
Equifinality is understood as one of the fundamental difficulties in the study of open complex systems, including catchment hydrology. A review of the hydrologic literature reveals that the term equifinality has been widely used, but in many cases inconsistently and without coherent recognition of the various facets of equifinality, which can lead to ambiguity but also to methodological fallacies. Therefore, in this study we first characterise the term equifinality within the context of hydrological modelling by reviewing the genesis of the concept of equifinality and then presenting a theoretical framework. During past decades, equifinality has mainly been studied as a subset of aleatory (arising due to randomness) uncertainty and for the assessment of model parameter uncertainty. Although the connection between parameter uncertainty and equifinality is undeniable, we argue there is more to equifinality than just aleatory parameter uncertainty. That is, the importance of equifinality and epistemic uncertainty (arising due to lack of knowledge) and their implications are overlooked in our current practice of model evaluation. Equifinality and epistemic uncertainty in studying, modelling, and evaluating hydrologic processes are treated as if they can be simply discussed in (or often reduced to) probabilistic terms (as for aleatory uncertainty). The deficiencies of this approach to conceptual rainfall-runoff modelling are demonstrated for selected Australian catchments by examination of parameter and internal flux distributions and interactions within SIMHYD. On this basis, we present a new approach that expands the equifinality concept beyond model parameters to inform epistemic uncertainty. The new approach potentially facilitates the identification and development of more physically plausible models and model evaluation schemes, particularly within the multiple working hypotheses framework, and is generalisable to other fields of environmental modelling as well.
NASA Astrophysics Data System (ADS)
Li, B.; Lee, H. C.; Duan, X.; Shen, C.; Zhou, L.; Jia, X.; Yang, M.
2017-09-01
The dual-energy CT-based (DECT) approach holds promise in reducing the overall uncertainty in proton stopping-power-ratio (SPR) estimation as compared to the conventional stoichiometric calibration approach. The objective of this study was to analyze the factors contributing to uncertainty in SPR estimation using the DECT-based approach and to derive a comprehensive estimate of the range uncertainty associated with SPR estimation in treatment planning. Two state-of-the-art DECT-based methods were selected and implemented on a Siemens SOMATOM Force DECT scanner. The uncertainties were first divided into five independent categories. The uncertainty associated with each category was estimated for lung, soft and bone tissues separately. A single composite uncertainty estimate was eventually determined for three tumor sites (lung, prostate and head-and-neck) by weighting the relative proportion of each tissue group for that specific site. The uncertainties associated with the two selected DECT methods were found to be similar, therefore the following results applied to both methods. The overall uncertainty (1σ) in SPR estimation with the DECT-based approach was estimated to be 3.8%, 1.2% and 2.0% for lung, soft and bone tissues, respectively. The dominant factor contributing to uncertainty in the DECT approach was the imaging uncertainties, followed by the DECT modeling uncertainties. Our study showed that the DECT approach can reduce the overall range uncertainty to approximately 2.2% (2σ) in clinical scenarios, in contrast to the previously reported 1%.
Optimization Under Uncertainty of Site-Specific Turbine Configurations
NASA Astrophysics Data System (ADS)
Quick, J.; Dykes, K.; Graf, P.; Zahle, F.
2016-09-01
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
NASA Astrophysics Data System (ADS)
Ginn, T. R.; Scheibe, T. D.
2006-12-01
Hydrogeology is among the most data-limited of the earth sciences, so uncertainty arises in every aspect of subsurface flow and transport modeling, from the conceptual model to the spatial discretization to the parameter values. Treatment of uncertainty is thus unavoidable, and the literature and conference proceedings are replete with approaches, templates, and paradigms for doing so. However, such tools remain underused, especially those of the stochastic analytic sort, which has recently prompted explicit inquiries about why this is the case, to which entire journal issues have been dedicated. In an effort to continue this discussion in a constructive way, we report on an informal yet extensive survey of hydrogeology practitioners, viewed as the "marketplace" for techniques to deal with uncertainty. We include scientists, engineers, regulators, and others in the survey, which reports on quantitative (and other) methods for uncertainty characterization and analysis, the frequency and level of their usage, and the reasons behind the selection or avoidance of available methods. The results shed light on fruitful directions for future research in uncertainty quantification in hydrogeology.
Shao, Kan; Small, Mitchell J
2011-10-01
A methodology is presented for assessing the information value of an additional dosage experiment in existing bioassay studies. The analysis demonstrates the potential reduction in the uncertainty of toxicity metrics derived from expanded studies, providing insights for future studies. Bayesian methods are used to fit alternative dose-response models using Markov chain Monte Carlo (MCMC) simulation for parameter estimation and Bayesian model averaging (BMA) is used to compare and combine the alternative models. BMA predictions for benchmark dose (BMD) are developed, with uncertainty in these predictions used to derive the lower bound BMDL. The MCMC and BMA results provide a basis for a subsequent Monte Carlo analysis that backcasts the dosage where an additional test group would have been most beneficial in reducing the uncertainty in the BMD prediction, along with the magnitude of the expected uncertainty reduction. Uncertainty reductions are measured in terms of reduced interval widths of predicted BMD values and increases in BMDL values that occur as a result of this reduced uncertainty. The methodology is illustrated using two existing data sets for TCDD carcinogenicity, fitted with two alternative dose-response models (logistic and quantal-linear). The example shows that an additional dose at a relatively high value would have been most effective for reducing the uncertainty in BMA BMD estimates, with predicted reductions in the widths of uncertainty intervals of approximately 30%, and expected increases in BMDL values of 5-10%. The results demonstrate that dose selection for studies that subsequently inform dose-response models can benefit from consideration of how these models will be fit, combined, and interpreted. © 2011 Society for Risk Analysis.
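The model-averaging step can be sketched compactly: given per-model BMD posterior samples and BMA weights, drawing from the weighted mixture yields a combined BMD distribution, whose lower percentile serves as the BMDL. The samples and weights below are placeholders, not the article's TCDD results.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical posterior BMD samples (mg/kg-day) from two fitted dose-response models.
bmd_logistic = rng.lognormal(mean=np.log(0.8), sigma=0.25, size=5000)
bmd_quantal_linear = rng.lognormal(mean=np.log(0.5), sigma=0.35, size=5000)

# Hypothetical BMA weights (e.g., from posterior model probabilities).
weights = {"logistic": 0.6, "quantal_linear": 0.4}

# Draw from the BMA mixture in proportion to the model weights.
n = 5000
n_logistic = int(weights["logistic"] * n)
mixture = np.concatenate([
    rng.choice(bmd_logistic, n_logistic, replace=True),
    rng.choice(bmd_quantal_linear, n - n_logistic, replace=True),
])

bmd_est = np.median(mixture)
bmdl = np.percentile(mixture, 5)   # lower bound taken at the 5th percentile
print(f"BMA BMD = {bmd_est:.2f}, BMDL = {bmdl:.2f} (mg/kg-day)")
```

Rerunning such a calculation with a hypothetical extra dose group added to the design is the backcasting exercise the abstract describes: the gain is read off as a narrower mixture interval and a higher BMDL.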
Edler, Lutz; Hart, Andy; Greaves, Peter; Carthew, Philip; Coulet, Myriam; Boobis, Alan; Williams, Gary M; Smith, Benjamin
2014-08-01
This article addresses a number of concepts related to the selection and modelling of carcinogenicity data for the calculation of a Margin of Exposure. It follows up on the recommendations put forward by the International Life Sciences Institute - European branch in 2010 on the application of the Margin of Exposure (MoE) approach to substances in food that are genotoxic and carcinogenic. The aims are to provide practical guidance on the relevance of animal tumour data for human carcinogenic hazard assessment, appropriate selection of tumour data for Benchmark Dose Modelling, and approaches for dealing with the uncertainty associated with the selection of data for modelling and, consequently, the derived Point of Departure (PoD) used to calculate the MoE. Although the concepts outlined in this article are interrelated, the background expertise needed to address each topic varies. For instance, the expertise needed to make a judgement on biological relevance of a specific tumour type is clearly different to that needed to determine the statistical uncertainty around the data used for modelling a benchmark dose. As such, each topic is dealt with separately to allow those with specialised knowledge to target key areas of guidance and provide a more in-depth discussion on each subject for those new to the concept of the Margin of Exposure approach. Copyright © 2013 ILSI Europe. Published by Elsevier Ltd.. All rights reserved.
NASA Astrophysics Data System (ADS)
Pillai, S. N.; Singh, H.; Panwar, A. S.; Meena, M. S.; Singh, S. V.; Singh, B.; Paudel, G. P.; Baigorria, G. A.; Ruane, A. C.; McDermid, S.; Boote, K. J.; Porter, C.; Valdivia, R. O.
2016-12-01
Integrated assessment of the climate change impact on agricultural productivity is a challenge to the scientific community because of uncertainties in the input data, particularly the climate, soil, crop calibration, and socio-economic datasets. The uncertainty due to the selection of GCMs is the major source, owing to the complex underlying processes involved in the initial and boundary conditions used in solving the air-sea interactions. Under the Agricultural Model Intercomparison and Improvement Project (AgMIP), the Indo-Gangetic Plains Regional Research Team investigated the uncertainties caused by GCM selection through sub-setting based on the annual climate as well as the crop-growth period of rice-wheat systems within the AgMIP Integrated Assessment methodology. The AgMIP Phase II protocols were used to link climate, crop, and economic models for two study sites, Meerut and Karnal, to analyse the sensitivity of current production systems to climate change. Climate change projections were made using 29 CMIP5 GCMs under RCP4.5 and RCP8.5 for the mid-century period (2040-2069). Two crop models (APSIM and DSSAT) were used, and the TOA-MD economic model was used for integrated assessment. Based on Representative Agricultural Pathways (RAPs), some parameters that cannot be obtained through modeling were derived from the literature and from stakeholder interactions and incorporated into the TOA-MD model for integrated assessment.
Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki
2011-04-01
In this study we sought to evaluate and highlight the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the published sets applied in the literature, and a theoretical mean parameter set was computed. To investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models over these ranges revealed a large area of coincidence. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, further constraining their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definitive but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provide a reasonably accurate fit of the NTCP models to the considered endpoint.
Merging information from multi-model flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions against which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
Hydrologic Model Selection using Markov chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Marshall, L.; Sharma, A.; Nott, D.
2002-12-01
Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty, whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm with characteristics that are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall-runoff data from the Bass River catchment in Australia. The alternative configurations considered fall into two classes - those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance and to the simplicity of the MCMC sampling framework, which has the ability to bring such approaches within the reach of the applied hydrological community.
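A minimal random-walk Metropolis sketch in Python illustrates the kind of posterior sampling described above; the toy linear-store runoff model, the synthetic rainfall series and all parameter values are assumptions for illustration, not the adaptive sampler or the AWBM configurations used in the study.

```python
# Minimal random-walk Metropolis sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def simulate_runoff(params, rain):
    """Toy linear-store runoff model: a loss threshold feeding a linear reservoir."""
    loss, k = params
    s, q = 0.0, np.zeros_like(rain)
    for t, r in enumerate(rain):
        s = max(s + r - loss, 0.0)
        q[t] = k * s
        s -= q[t]
    return q

rain = rng.gamma(2.0, 3.0, size=365)
q_obs = simulate_runoff([1.0, 0.3], rain) + rng.normal(0.0, 0.5, size=365)

def log_posterior(params):
    loss, k = params
    if not (0.0 < loss < 10.0 and 0.0 < k < 1.0):      # uniform priors
        return -np.inf
    resid = q_obs - simulate_runoff(params, rain)
    return -0.5 * np.sum(resid ** 2) / 0.5 ** 2        # Gaussian, independent errors

theta = np.array([2.0, 0.5])
logp = log_posterior(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.1, 0.02])        # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:       # Metropolis acceptance rule
        theta, logp = prop, logp_prop
    samples.append(theta.copy())

print(np.mean(samples[1000:], axis=0))                 # posterior mean after burn-in
```

In a model-comparison setting such as the one described above, one such sampler would be run for each candidate model structure and the resulting posteriors and fits compared.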
Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation
NASA Astrophysics Data System (ADS)
Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using one single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that would normally be ignored if a single AI model were used.
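A hedged sketch of the averaging step: BIC-based weights are converted into a model-averaged estimate with within-model and between-model variance terms. The model names, predictions, variances and BIC values below are invented placeholders, not results from the Tasuj plain study.

```python
# BIC-weighted model averaging with within- and between-model variance (sketch).
import numpy as np

# Per-model predictions of hydraulic conductivity at three locations, plus each
# model's within-model variance (e.g., from input-error propagation).
preds = {"TS_FL": np.array([1.2, 0.8, 2.1]),
         "ANN":   np.array([1.0, 1.1, 1.9]),
         "NF":    np.array([1.3, 0.7, 2.4])}
within_var = {"TS_FL": np.array([0.05, 0.04, 0.10]),
              "ANN":   np.array([0.06, 0.05, 0.08]),
              "NF":    np.array([0.04, 0.06, 0.12])}
# BIC values computed elsewhere for each calibrated model (placeholders).
bic = {"TS_FL": 52.0, "ANN": 52.3, "NF": 61.5}

# BMA weights from BIC differences; the parsimony penalty downweights complex models.
d = np.array([bic[m] - min(bic.values()) for m in preds])
w = np.exp(-0.5 * d) / np.sum(np.exp(-0.5 * d))

p = np.array([preds[m] for m in preds])        # shape (models, locations)
v = np.array([within_var[m] for m in preds])
mean = w @ p                                   # model-averaged estimate
within = w @ v                                 # expected within-model variance
between = w @ (p - mean) ** 2                  # spread across models
total_var = within + between
print(dict(zip(preds, w)), mean, total_var)
```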
Multi-criteria evaluation of wastewater treatment plant control strategies under uncertainty.
Flores-Alsina, Xavier; Rodríguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V
2008-11-01
The evaluation of activated sludge control strategies in wastewater treatment plants (WWTP) via mathematical modelling is a complex activity because several objectives (e.g. economic, environmental, technical and legal) must be taken into account at the same time, i.e. the evaluation of the alternatives is a multi-criteria problem. Activated sludge models are not well characterized and some of the parameters can present uncertainty, e.g. the influent fractions arriving at the facility and the effect of either temperature or toxic compounds on the kinetic parameters, which strongly influences the model predictions used during the evaluation of the alternatives and affects the resulting rank of preferences. Using a simplified version of the IWA Benchmark Simulation Model No. 2 as a case study, this article shows the variations in the decision making when the uncertainty in activated sludge model (ASM) parameters is either included or not during the evaluation of WWTP control strategies. This paper comprises two main sections. First, six WWTP control strategies are evaluated using multi-criteria decision analysis with the ASM parameters set at their default values. In the following section, the uncertainty is introduced, i.e. input uncertainty, which is characterized by probability distribution functions based on the available process knowledge. Next, Monte Carlo simulations are run to propagate input uncertainty through the model to the different outcomes. Thus (i) the variation in the overall degree of satisfaction of the control objectives for the generated WWTP control strategies is quantified, (ii) the contributions of environmental, legal, technical and economic objectives to the existing variance are identified and finally (iii) the influence of the relative importance of the control objectives during the selection of alternatives is analyzed. The results show that the control strategies with an external carbon source reduce the output uncertainty in the criteria used to quantify the degree of satisfaction of environmental, technical and legal objectives, but increase the economic costs and their variability as a trade-off. It is also shown how a preliminarily selected alternative with a cascade ammonium controller becomes less desirable when input uncertainty is included, while simpler alternatives have a greater chance of success.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cardoni, Jeffrey N.; Kalinich, Donald A.
2014-02-01
Sandia National Laboratories (SNL) plans to conduct uncertainty analyses (UA) on the Fukushima Daiichi unit (1F1) plant with the MELCOR code. The model to be used was developed for a previous accident reconstruction investigation jointly sponsored by the US Department of Energy (DOE) and Nuclear Regulatory Commission (NRC). However, that study only examined a handful of various model inputs and boundary conditions, and the predictions yielded only fair agreement with plant data and current release estimates. The goal of this uncertainty study is to perform a focused evaluation of uncertainty in core melt progression behavior and its effect on key figures-of-merit (e.g., hydrogen production, vessel lower head failure, etc.). In preparation for the SNL Fukushima UA work, a scoping study has been completed to identify important core melt progression parameters for the uncertainty analysis. The study also lays out a preliminary UA methodology.
NASA Astrophysics Data System (ADS)
Brannan, K. M.; Somor, A.
2016-12-01
A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. In order to reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically-based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis: we set aside flow data that occurred on days when bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
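The covariance-aware sampling idea can be sketched as follows; this is not PEST itself, and the parameter vector, covariance matrix and toy flow predictor are assumptions made for illustration.

```python
# Draw correlated parameter sets and summarise predictive spread per observation (sketch).
import numpy as np

rng = np.random.default_rng(1)
theta_hat = np.array([0.35, 120.0, 0.8])                 # calibrated parameter estimates
cov = np.array([[0.002,  0.05,  0.0005],
                [0.05,  25.0,   0.02  ],
                [0.0005, 0.02,  0.01  ]])                # parameter covariance estimate

sets = rng.multivariate_normal(theta_hat, cov, size=1000)    # 1,000 correlated parameter sets

def predict_flow(params, t):
    """Placeholder for a watershed-model run returning flow at day t."""
    a, b, c = params
    return a * b * np.exp(-c * t / 30.0)

t_holdout = np.array([5, 40, 90, 200])                   # days with bacteria samples (held out)
flows = np.array([[predict_flow(p, t) for t in t_holdout] for p in sets])

# Percent uncertainty per held-out observation: half the 95% interval over the median.
lo, med, hi = np.percentile(flows, [2.5, 50.0, 97.5], axis=0)
pct_uncertainty = 100.0 * (hi - lo) / (2.0 * med)
print(pct_uncertainty)
```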
SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH
While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...
Multivariate Probabilistic Analysis of an Hydrological Model
NASA Astrophysics Data System (ADS)
Franceschini, Samuela; Marani, Marco
2010-05-01
Model predictions based on rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses of the hydrologic response to selected meteorological events in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially-distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response with probabilistic methods. In particular, we compare the results of Monte Carlo Simulations (MCS) to the results obtained, under the same conditions, using Li's Point Estimate Method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This allows results to be reproduced satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. LiM is less computationally demanding than MCS, but has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimates, but reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case the spatial variability of rainfall and the uncertainty in the model parameters are shown to have the greatest impact on the discharge evaluation. This, in turn, highlights the need to support any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
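To illustrate the point-estimate idea against Monte Carlo, the sketch below uses a Rosenblueth-type two-point scheme for independent, symmetric inputs as a simplified stand-in for Li's method; the toy discharge function and all input statistics are assumptions.

```python
# Two-point estimate versus Monte Carlo for the mean and spread of a model output (sketch).
import itertools
import numpy as np

rng = np.random.default_rng(2)

def discharge(rain_depth, cn, area_km2):
    """Toy nonlinear rainfall-to-peak-discharge response (illustrative only)."""
    runoff = max(rain_depth - 200.0 / cn, 0.0) ** 1.2
    return 0.278 * runoff * area_km2 / 10.0

mu = np.array([60.0, 75.0, 120.0])        # mean rain (mm), curve number, area (km2)
sd = np.array([12.0, 5.0, 6.0])

# Point-estimate scheme: evaluate at mu_i +/- sd_i for every sign combination (8 runs).
vals = [discharge(*(mu + np.array(signs) * sd))
        for signs in itertools.product([-1.0, 1.0], repeat=3)]
pe_mean, pe_std = np.mean(vals), np.std(vals)

# Monte Carlo reference with many more model evaluations.
samples = rng.normal(mu, sd, size=(20000, 3))
mc = np.array([discharge(*s) for s in samples])
print(pe_mean, pe_std, mc.mean(), mc.std())
```

The contrast in the number of model evaluations (8 versus 20,000 here) is the computational advantage discussed above, while the nonlinear response function is where the approximation can degrade.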
Optimization Under Uncertainty of Site-Specific Turbine Configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, J.; Dykes, K.; Graf, P.
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
Optimization under Uncertainty of Site-Specific Turbine Configurations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quick, Julian; Dykes, Katherine; Graf, Peter
Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic one [Normal likelihood - r ~ N(0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach is adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.
A study of undue pain and surfing: using hierarchical criteria to assess website quality.
Lorence, Daniel; Abraham, Joanna
2008-09-01
In studies of web-based consumer health information, scant attention has been paid to the selective development of differential methodologies for website quality evaluation, or to selective grouping and analysis of specific 'domains of uncertainty' in healthcare. Our objective is to introduce a more refined model for website evaluation, and illustrate its application using assessment of websites within an area of ongoing medical uncertainty, back pain. In this exploratory technology assessment, we suggest a model for assessing these 'domains of uncertainty' within healthcare, using qualitative assessment of websites and hierarchical concepts. Using such a hierarchy of quality criteria, we review medical information provided by the most frequently accessed websites related to back pain. Websites are evaluated using standardized criteria, with results rated from the viewpoint of the consumer. Results show that standardization of quality rating across subjective content, and between commercial and niche search results, can provide a consumer-friendly dimension to health information.
How Many Fish Need to Be Measured to Effectively Evaluate Trawl Selectivity?
Santos, Juan; Sala, Antonello
2016-01-01
The aim of this study was to provide practitioners working with trawl selectivity with general and easily understandable guidelines regarding the fish sampling effort necessary during sea trials. In particular, we focused on how many fish would need to be caught and length measured in a trawl haul in order to assess the selectivity parameters of the trawl at a designated uncertainty level. We also investigated the dependency of this uncertainty level on the experimental method used to collect data and on the potential effects of factors such as the size structure in the catch relative to the size selection of the gear. We based this study on simulated data created from two different fisheries: the Barents Sea cod (Gadus morhua) trawl fishery and the Mediterranean Sea multispecies trawl fishery represented by red mullet (Mullus barbatus). We used these two completely different fisheries to obtain results that can be used as general guidelines for other fisheries. We found that the uncertainty in the selection parameters decreased with increasing number of fish measured and that this relationship could be described by a power model. The sampling effort needed to achieve a specific uncertainty level for the selection parameters was always lower for the covered codend method compared to the paired-gear method. In many cases, the number of fish that would need to be measured to maintain a specific uncertainty level was around 10 times higher for the paired-gear method than for the covered codend method. The trends observed for the effect of sampling effort in the two fishery cases investigated were similar; therefore the guidelines presented herein should be applicable to other fisheries. PMID:27560696
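The reported power-law relationship between sampling effort and uncertainty can be fitted with a few lines of Python; the fish counts and uncertainty values below are invented for illustration and are not the simulated data of the study.

```python
# Fit the power model u = a * n**b by least squares in log-log space (sketch).
import numpy as np

n_fish = np.array([100, 250, 500, 1000, 2500, 5000])     # fish measured per haul
unc = np.array([4.1, 2.7, 1.9, 1.4, 0.9, 0.66])          # e.g. width of a selection-parameter CI

b, log_a = np.polyfit(np.log(n_fish), np.log(unc), 1)    # slope and intercept in log-log space
a = np.exp(log_a)
print(f"u ~ {a:.2f} * n^{b:.2f}")

# Invert the power model to estimate the effort needed for a target uncertainty level.
target = 1.0
print("fish needed:", int((target / a) ** (1.0 / b)))
```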
Niemansburg, Sophie L; Habets, Michelle G J L; Dhert, Wouter J A; van Delden, Johannes J M; Bredenoord, Annelien L
2015-11-01
The innovative field of Regenerative Medicine (RM) is expected to extend the possibilities of prevention or early treatment in healthcare. Increasingly, clinical trials will be developed for people at risk of disease to investigate these RM interventions. These individuals at risk are characterised by their susceptibility to developing clinically manifest disease in the future due to the existence of degenerative abnormalities. So far, there has been little debate about the ethical appropriateness of including such individuals at risk in clinical trials. We discuss three main challenges of selecting this participant model for testing RM interventions: the challenge of achieving a proportional risk-benefit balance; complexities in the trial design in terms of follow-up and sample size; and the difficulty of obtaining informed consent due to the many uncertainties. We conclude that selecting this model is not ethically justifiable for first-in-man trials with RM interventions due to the high risks and uncertainties. However, the model can be ethically appropriate for testing the efficacy of RM interventions under the following conditions: interventions should be low risk; the degenerative abnormalities (and other risk factors) should be strongly related to disease within a short time frame; robust preclinical evidence of efficacy needs to be present; and the informed consent procedure should contain extra safeguards with regard to communication on uncertainties.
DeWeber, Jefferson T; Wagner, Tyler
2018-06-01
Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our study demonstrates that even relatively small differences in the definitions of climate metrics can result in very different projections and reveal high uncertainty in predicted climate change effects.
Modeling uncertainty in producing natural gas from tight sands
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chermak, J.M.; Dahl, C.A.; Patrick, R.H
1995-12-31
Since accurate geologic, petroleum engineering, and economic information are essential ingredients in making profitable production decisions for natural gas, we combine these ingredients in a dynamic framework to model natural gas reservoir production decisions. We begin with the certainty case before proceeding to consider how uncertainty might be incorporated in the decision process. Our production model uses dynamic optimal control to combine economic information with geological constraints to develop optimal production decisions. To incorporate uncertainty into the model, we develop probability distributions on geologic properties for the population of tight gas sand wells and perform a Monte Carlo study to select a sample of wells. Geological production factors, completion factors, and financial information are combined into the hybrid economic-petroleum reservoir engineering model to determine the optimal production profile, initial gas stock, and net present value (NPV) for an individual well. To model the probability of the production abandonment decision, the NPV data is converted to a binary dependent variable. A logit model is used to model this decision as a function of the above geological and economic data to give probability relationships. Additional ways to incorporate uncertainty into the decision process include confidence intervals and utility theory.
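A hedged sketch of the final statistical step only: a logit model of the production/abandonment decision fitted to a binary indicator derived from NPV. The covariates, their distributions and the synthetic NPV relation are assumptions, not the tight-gas data set of the study.

```python
# Logit model of a binary production decision as a function of geologic/economic covariates (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
permeability = rng.lognormal(mean=-2.0, sigma=0.7, size=n)   # md
thickness = rng.normal(15.0, 4.0, size=n)                    # m
gas_price = rng.normal(2.5, 0.5, size=n)                     # $/Mcf

# Synthetic NPV and the binary "continue producing" indicator (1 = NPV > 0).
npv = (2.0 * np.log(permeability) + 0.3 * thickness + 4.0 * gas_price - 10.0
       + rng.normal(0.0, 2.0, size=n))
produce = (npv > 0.0).astype(int)

X = sm.add_constant(np.column_stack([np.log(permeability), thickness, gas_price]))
logit = sm.Logit(produce, X).fit(disp=False)
print(logit.params)                                          # log-odds coefficients
print(logit.predict(X[:5]))                                  # estimated production probabilities
```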
NASA Astrophysics Data System (ADS)
Rodríguez-Rincón, J. P.; Pedrozo-Acuña, A.; Breña-Naranjo, J. A.
2015-07-01
This investigation aims to study the propagation of meteorological uncertainty within a cascade modelling approach to flood prediction. The methodology comprised a numerical weather prediction (NWP) model, a distributed rainfall-runoff model and a 2-D hydrodynamic model. The uncertainty evaluation was carried out at the meteorological and hydrological levels of the model chain, which enabled the investigation of how errors that originated in the rainfall prediction interact at the catchment level and propagate to an estimated inundation area and depth. For this, a hindcast scenario is utilised, removing non-behavioural ensemble members at each stage based on the fit with observed data. At the hydrodynamic level, an uncertainty assessment was not incorporated; instead, the model was set up following guidelines for the best possible representation of the case study. The selected extreme event corresponds to a flood that took place in the southeast of Mexico during November 2009, for which field data (e.g. rain gauges, discharge) and satellite imagery were available. Uncertainty in the meteorological model was estimated by means of a multi-physics ensemble technique, which is designed to represent errors from our limited knowledge of the processes generating precipitation. In the hydrological model, a multi-response validation was implemented through the definition of six sets of plausible parameters from past flood events. Precipitation fields from the meteorological model were employed as input in a distributed hydrological model, and the resulting flood hydrographs were used as forcing conditions in the 2-D hydrodynamic model. The evolution of skill within the model cascade shows a complex aggregation of errors between models, suggesting that in valley-filling events hydro-meteorological uncertainty has a larger effect on inundation depths than on estimated flood inundation extents.
NASA Astrophysics Data System (ADS)
Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun
2015-04-01
This work presents a new robust model reference adaptive control (MRAC) for vibration caused by the vehicle engine, using an electromagnetic type of active engine mount. Vibration isolation performance of the active mount associated with the robust controller is evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental testing. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness of the controller in the face of large uncertainties. The obtained results show that the proposed controller provides robust vibration control performance even in the presence of large uncertainties, achieving effective vibration isolation.
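As a hedged illustration of a gradient MRAC law with σ-modification, the scalar first-order example below shows how the leakage term bounds the adaptive gains; the plant, reference model and gains are generic assumptions, not the identified engine-mount dynamics or the controller of the study.

```python
# Scalar MRAC with sigma-modification (gradient update plus leakage), Euler-simulated (sketch).
import numpy as np

dt, T = 0.001, 10.0
steps = int(T / dt)
a_p, b_p = 1.0, 2.0            # "unknown" plant parameters (unstable open loop)
a_m, b_m = -4.0, 4.0           # stable reference model
gamma, sigma = 10.0, 0.05      # adaptation gain and leakage (robustness) term

x, x_m = 0.0, 0.0
th_x, th_r = 0.0, 0.0          # adaptive feedback and feedforward gains
for k in range(steps):
    r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0          # square-wave reference
    u = th_x * x + th_r * r
    e = x - x_m                                        # tracking error
    # Gradient adaptation with sigma-modification (leakage keeps the gains bounded).
    th_x += dt * (-gamma * e * x - sigma * gamma * th_x)
    th_r += dt * (-gamma * e * r - sigma * gamma * th_r)
    x += dt * (a_p * x + b_p * u)                      # plant (Euler step)
    x_m += dt * (a_m * x_m + b_m * r)                  # reference model

print("final gains:", th_x, th_r, "ideal:", (a_m - a_p) / b_p, b_m / b_p)
```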
A solution to the static frame validation challenge problem using Bayesian model selection
Grigoriu, M. D.; Field, R. V.
2007-12-23
Within this paper, we provide a solution to the static frame validation challenge problem (see this issue) in a manner that is consistent with the guidelines provided by the Validation Challenge Workshop tasking document. The static frame problem is constructed such that variability in material properties is known to be the only source of uncertainty in the system description, but there is ignorance on the type of model that best describes this variability. Hence both types of uncertainty, aleatoric and epistemic, are present and must be addressed. Our approach is to consider a collection of competing probabilistic models for the material properties, and calibrate these models to the information provided; models of different levels of complexity and numerical efficiency are included in the analysis. A Bayesian formulation is used to select the optimal model from the collection, which is then used for the regulatory assessment. Lastly, Bayesian credible intervals are used to provide a measure of confidence in our regulatory assessment.
The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.
2013-01-01
We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.
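A back-of-the-envelope sketch of the selectivity calculation: multiplying (assumed independent) match probabilities for the precursor ion, product ions and retention-time window gives the chance of a coincidental match, and hence an estimate of the false-positive risk. All probabilities and the candidate-compound count below are invented placeholders, not values from the empirical database models.

```python
# Rough selectivity estimate from combined match probabilities (sketch, independence assumed).
p_precursor = 0.012      # fraction of compounds within the precursor m/z tolerance
p_product1 = 0.05        # fraction of those yielding product ion 1
p_product2 = 0.08        # fraction of those yielding product ion 2
p_rt_window = 0.10       # fraction eluting within the retention-time tolerance

p_single_compound = p_precursor * p_product1 * p_product2 * p_rt_window

# Probability that at least one of N candidate compounds matches by chance.
n_candidates = 50000
p_false_positive = 1.0 - (1.0 - p_single_compound) ** n_candidates
print(p_single_compound, p_false_positive)
```

If the resulting probability is judged too high, the response described above would be to add or change product ions and recompute.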
NASA Astrophysics Data System (ADS)
Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.
2012-04-01
Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from the parametric uncertainty. To quantify the conceptual uncertainty for a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site where a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown, as is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the choice among the management options.
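The combination step can be sketched as a weighted mixture of per-model Monte Carlo samples; the three conceptual-model names, their lognormal mass-discharge distributions and the weights standing in for the Bayesian-belief-network output are assumptions for illustration.

```python
# Pool per-conceptual-model Monte Carlo samples into a model-averaged distribution (sketch).
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo mass-discharge samples (g/day) from three hypothetical conceptual site models.
samples = {"M1_matrix_only":    rng.lognormal(1.0, 0.4, 2000),
           "M2_with_fractures": rng.lognormal(1.8, 0.6, 2000),
           "M3_depleted_source": rng.lognormal(0.5, 0.3, 2000)}
weights = {"M1_matrix_only": 0.45, "M2_with_fractures": 0.35, "M3_depleted_source": 0.20}

# Resample each model's output in proportion to its weight to form the averaged mixture.
pooled = np.concatenate([
    rng.choice(s, size=int(round(weights[m] * 10000)), replace=True)
    for m, s in samples.items()])

print("model-averaged mean:", pooled.mean())
print("90% interval:", np.percentile(pooled, [5, 95]))
```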
Constraining uncertainties in water supply reliability in a tropical data scarce basin
NASA Astrophysics Data System (ADS)
Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte
2015-04-01
Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability are often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data-scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela, with an area of approximately 2100 km², was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the reliability of the surface water supply in satisfying irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most of the months when the water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (r>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a more complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data and perhaps improved model structure will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
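A minimal sketch of the extended framework's core idea, using Fu's form of the Budyko curve with an uncertain shape parameter to turn mean annual supply into a probabilistic reliability statement; all numbers are illustrative assumptions, not data from the study basin.

```python
# Budyko-type water balance (Fu's equation) with parameter uncertainty and a reliability estimate (sketch).
import numpy as np

rng = np.random.default_rng(5)

P, PET = 1600.0, 1250.0                 # mean annual precipitation and PET (mm)
demand = 550.0                          # irrigation plus urban demand (mm equivalent)

w = rng.normal(2.6, 0.4, size=5000)     # uncertain Budyko shape parameter
w = w[w > 1.0]                          # Fu's equation requires w > 1

phi = PET / P
evap_ratio = 1.0 + phi - (1.0 + phi ** w) ** (1.0 / w)   # E/P from Fu's curve
runoff = P * (1.0 - evap_ratio)                          # mean annual water yield (mm)

reliability = np.mean(runoff >= demand)
print("supply 5-95%:", np.percentile(runoff, [5, 95]))
print("probability supply meets demand:", reliability)
```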
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2012-01-01
The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) in every 20–549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested if they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. PMID:22487046
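A simplified, hedged analogue of the modelling idea (not the paper's exact Bayesian Q-learning model): a Beta-Bernoulli two-hole bandit in which actions are valued by the posterior mean plus a positive multiple of the posterior standard deviation, giving an uncertainty-seeking policy.

```python
# Beta-Bernoulli bandit with an uncertainty-seeking action value (sketch).
import numpy as np

rng = np.random.default_rng(6)
true_p = {"left": 0.66, "right": 0.33}       # one of the reward-probability settings
alpha = {"left": 1.0, "right": 1.0}          # Beta prior counts (rewards)
beta = {"left": 1.0, "right": 1.0}           # Beta prior counts (non-rewards)
phi = 0.8                                    # uncertainty bonus (positive = seeking)
inv_temp = 5.0                               # softmax inverse temperature

choices, rewards = [], []
for trial in range(300):
    value = {}
    for a in true_p:
        mean = alpha[a] / (alpha[a] + beta[a])
        var = alpha[a] * beta[a] / ((alpha[a] + beta[a]) ** 2 * (alpha[a] + beta[a] + 1))
        value[a] = mean + phi * np.sqrt(var)
    # Softmax action selection on the uncertainty-augmented values.
    p_left = 1.0 / (1.0 + np.exp(-inv_temp * (value["left"] - value["right"])))
    action = "left" if rng.uniform() < p_left else "right"
    reward = rng.uniform() < true_p[action]
    alpha[action] += reward                   # Bayesian (Beta-Bernoulli) update
    beta[action] += 1 - reward
    choices.append(action)
    rewards.append(reward)

print("fraction left:", np.mean([c == "left" for c in choices]), "mean reward:", np.mean(rewards))
```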
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, to employ several numerical solvers for the integration of the governing ordinary differential equations to study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles
NASA Astrophysics Data System (ADS)
Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.
2016-12-01
Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean, as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus, wrong estimation of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, in which all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
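A hedged sketch of the subset-selection idea: greedy forward search for the model runs whose equally weighted ensemble-mean climatology minimizes RMSE against an observational field. The random "model runs" and "observations" below are placeholders, not CMIP5 output.

```python
# Greedy forward selection of an ensemble subset minimizing ensemble-mean RMSE (sketch).
import numpy as np

rng = np.random.default_rng(7)
n_models, n_gridcells = 30, 2000
obs = rng.normal(0.0, 1.0, n_gridcells)
runs = obs + rng.normal(0.3, 1.0, (n_models, n_gridcells))   # biased, noisy pseudo-models

def rmse(subset):
    return np.sqrt(np.mean((runs[list(subset)].mean(axis=0) - obs) ** 2))

selected, remaining = [], set(range(n_models))
for _ in range(5):                                           # pick a 5-member subset
    best = min(remaining, key=lambda m: rmse(selected + [m]))
    selected.append(best)
    remaining.remove(best)

print("selected runs:", selected, "subset RMSE:", rmse(selected))
print("full-ensemble RMSE:", rmse(range(n_models)))
```

Error cancellation is what lets a small, well-chosen subset beat the full equally-weighted ensemble mean here, mirroring the bias reduction described above.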
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Smyth, E.; Small, E. E.
2017-12-01
The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
Model Uncertainty and Bayesian Model Averaged Benchmark Dose Estimation for Continuous Data
The benchmark dose (BMD) approach has gained acceptance as a valuable risk assessment tool, but risk assessors still face significant challenges associated with selecting an appropriate BMD/BMDL estimate from the results of a set of acceptable dose-response models. Current approa...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale *sequential* data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Wind Plant Power Optimization and Control under Uncertainty
NASA Astrophysics Data System (ADS)
Jha, Pankaj; Ulker, Demet; Hutchings, Kyle; Oxley, Gregory
2017-11-01
The development of optimized cooperative wind plant control involves the coordinated operation of individual turbines co-located within a wind plant to improve the overall power production. This is typically achieved by manipulating the trajectory and intensity of wake interactions between nearby turbines, thereby reducing wake losses. However, there are various types of uncertainties involved, such as turbulent inflow and microscale and turbine model input parameters. In a recent NREL-Envision collaboration, a controller that performs wake steering was designed and implemented for the Longyuan Rudong offshore wind plant in Jiangsu, China. The Rudong site contains 25 Envision EN136-4 MW turbines, of which a subset was selected for the field test campaign consisting of the front two rows for the northeasterly wind direction. In the first row, a turbine was selected as the reference turbine, providing comparison power data, while another was selected as the controlled turbine. This controlled turbine wakes three different turbines in the second row depending on the wind direction. A yaw misalignment strategy was designed using Envision's GWCFD, a multi-fidelity plant-scale CFD tool based on SOWFA with a generalized actuator disc (GAD) turbine model, which, in turn, was used to tune NREL's FLORIS model used for wake steering and yaw control optimization. The presentation will account for some associated uncertainties, such as those in atmospheric turbulence and wake profile.
Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling
Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...
2014-10-12
The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the single parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.
NASA Astrophysics Data System (ADS)
Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix
2017-04-01
It is important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, in general, both approaches are able to provide similar predictive performance. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful to achieve less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor; this predictive distribution is then used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.
2009-01-01
selection and uncertainty sampling significantly. Index Terms: Transcription, labeling, submodularity, submodular selection, active learning, sequence...name of batch active learning, where a subset of data that is most informative and representative of the whole is selected for labeling. Often...representative subset. Note that our Fisher kernel is over an unsupervised generative model, which enables us to bootstrap our active learning approach
NASA Astrophysics Data System (ADS)
Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.
2012-12-01
Artificial Neural Networks (ANNs) have been widely used to estimate concentration of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model given a specific case. Therefore, multiple plausible ANN models generally result for a given study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, the HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through aggregation of ANN models in a hierarchical framework. This method is applied for estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that the HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates obtained with the HBMA method show better agreement with the observation data in the test step because they are not based on a single model with a non-dominant weight.
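A minimal numerical sketch of the averaging step described in this abstract: given several plausible models, weights proportional to each model's evidence combine the individual predictions into an averaged estimate, and the total variance separates within-model and between-model contributions. The log-evidences, means and variances below are hypothetical placeholders; the sketch shows a single level of averaging, not the full HBMA hierarchy.

```python
import numpy as np

# Hypothetical log-evidences and predictions (mean, variance) of three
# plausible ANN models for fluoride concentration at one location.
log_evidence = np.array([-120.3, -121.1, -124.8])
means = np.array([1.8, 2.1, 1.5])          # mg/L, per-model predictive means
variances = np.array([0.04, 0.06, 0.09])   # per-model predictive variances

# Posterior model weights: w_k proportional to exp(log evidence), normalised stably.
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

# Model-averaged mean.
mean_bma = np.sum(w * means)

# Total variance = within-model variance + between-model variance.
var_within = np.sum(w * variances)
var_between = np.sum(w * (means - mean_bma) ** 2)
var_bma = var_within + var_between

print(f"weights: {np.round(w, 3)}")
print(f"averaged mean: {mean_bma:.3f}, total variance: {var_bma:.3f}")
print(f"  within-model: {var_within:.3f}, between-model: {var_between:.3f}")
```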
Casajus, Nicolas; Périé, Catherine; Logan, Travis; Lambert, Marie-Claude; de Blois, Sylvie; Berteaux, Dominique
2016-01-01
An impressive number of new climate change scenarios have recently become available to assess the ecological impacts of climate change. Among these impacts, shifts in species range analyzed with species distribution models are the most widely studied. Whereas it is widely recognized that the uncertainty in future climatic conditions must be taken into account in impact studies, many assessments of species range shifts still rely on just a few climate change scenarios, often selected arbitrarily. We describe a method to select objectively a subset of climate change scenarios among a large ensemble of available ones. Our k-means clustering approach reduces the number of climate change scenarios needed to project species distributions, while retaining the coverage of uncertainty in future climate conditions. We first show, for three biologically-relevant climatic variables, that a reduced number of six climate change scenarios generates average climatic conditions very close to those obtained from a set of 27 scenarios available before reduction. A case study on potential gains and losses of habitat by three northeastern American tree species shows that potential future species distributions projected from the selected six climate change scenarios are very similar to those obtained from the full set of 27, although with some spatial discrepancies at the edges of species distributions. In contrast, projections based on just a few climate models vary strongly according to the initial choice of climate models. We give clear guidance on how to reduce the number of climate change scenarios while retaining the central tendencies and coverage of uncertainty in future climatic conditions. This should be particularly useful during future climate change impact studies as more than twice as many climate models were reported in the fifth assessment report of IPCC compared to the previous one. PMID:27015274
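The scenario-reduction idea above lends itself to a short sketch: cluster the scenarios in the space of the biologically relevant climate variables and keep, from each cluster, the scenario closest to the centroid. The 27-scenario, 3-variable array below is synthetic; only the choice of six clusters follows the abstract, everything else is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic ensemble: 27 climate change scenarios described by anomalies in
# three biologically relevant variables (e.g. summer T, winter T, annual P).
scenarios = rng.normal(loc=[2.5, 3.0, 5.0], scale=[1.0, 1.2, 8.0], size=(27, 3))

# Standardise so each variable contributes equally to the distances.
z = (scenarios - scenarios.mean(axis=0)) / scenarios.std(axis=0)

# Partition the ensemble into six clusters.
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(z)

# From each cluster, retain the scenario nearest to the cluster centroid.
selected = []
for k in range(km.n_clusters):
    members = np.where(km.labels_ == k)[0]
    dists = np.linalg.norm(z[members] - km.cluster_centers_[k], axis=1)
    selected.append(int(members[np.argmin(dists)]))

print("selected scenario indices:", sorted(selected))
print("full-ensemble mean anomalies:   ", scenarios.mean(axis=0).round(2))
print("reduced-ensemble mean anomalies:", scenarios[selected].mean(axis=0).round(2))
```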
How to find what you don't know: Visualising variability in 3D geological models
NASA Astrophysics Data System (ADS)
Lindsay, Mark; Wellmann, Florian; Jessell, Mark; Ailleres, Laurent
2014-05-01
Uncertainties in input data can have compounding effects on the predictive reliability of three-dimensional (3D) geological models. Resource exploration, tectonic studies and environmental modelling can be compromised by using 3D models that misrepresent the target geology, and drilling campaigns that attempt to intersect particular geological units guided by 3D models are at risk of failure if the exploration geologist is unaware of inherent uncertainties. In addition, the visual inspection of 3D models is often the first contact decision makers have with the geology, thus visually communicating the presence and magnitude of uncertainties contained within geological 3D models is critical. Unless uncertainties are presented early in the relationship between decision maker and model, the model will be considered more truthful than the uncertainties allow with each subsequent viewing. We present a selection of visualisation techniques that provide the viewer with an insight into the location and amount of uncertainty contained within a model, and the geological characteristics which are most affected. A model of the Gippsland Basin, southeastern Australia is used as a case study to demonstrate the concepts of information entropy, stratigraphic variability and geodiversity. Central to the techniques shown here is the creation of a model suite, performed by creating similar (but not identical) versions of the original model through perturbation of the input data. Specifically, structural data in the form of strike and dip measurements are perturbed in the creation of the model suite. The visualisation techniques presented are: (i) information entropy; (ii) stratigraphic variability and (iii) geodiversity. Information entropy is used to analyse uncertainty in a spatial context, combining the empirical probability distributions of multiple outcomes with a single quantitative measure. Stratigraphic variability displays the number of possible lithologies that may exist at a given point within the model volume. Geodiversity analyses various model characteristics (or 'geodiversity metrics'), including the depth, volume of unit, the curvature of an interface, the geological complexity of a contact and the contact relationships units have with each other. Principal component analysis, a multivariate statistical technique, is used to simultaneously examine each of the geodiversity metrics to determine the boundaries of model space, and identify which metrics contribute most to model uncertainty. The combination of information entropy, stratigraphic variability and geodiversity analysis provides a descriptive and thorough representation of uncertainty with effective visualisation techniques that clearly communicate the geological uncertainty contained within the geological model.
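Information entropy as used in this abstract reduces, per model cell, to the Shannon entropy of the empirical lithology probabilities across the perturbed model suite. A small sketch with a synthetic suite (hypothetical lithology codes and grid, not the Gippsland Basin model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic model suite: 50 perturbed realisations of a small model volume,
# each cell holding one of 4 lithology codes.
n_real, shape = 50, (20, 20, 10)
suite = rng.integers(0, 4, size=(n_real, *shape))

# Empirical probability of each lithology in each cell across the suite.
probs = np.stack([(suite == lith).mean(axis=0) for lith in range(4)])

# Shannon information entropy per cell (0 = all realisations agree,
# log(4) = all four lithologies equally likely).
with np.errstate(divide="ignore", invalid="ignore"):
    entropy = -np.sum(np.where(probs > 0, probs * np.log(probs), 0.0), axis=0)

print("max possible entropy:", np.log(4).round(3))
print("mean cell entropy   :", entropy.mean().round(3))
print("most uncertain cell :", np.unravel_index(entropy.argmax(), shape))
```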
Monosov, Ilya E.; Hikosaka, Okihide
2014-01-01
Natural environments are uncertain. Uncertainty of emotional outcomes can induce anxiety and raise vigilance, promote and signal the opportunity for learning, modulate economic choice, and regulate risk seeking. Here we demonstrate that a subset of neurons in the anterodorsal region of the primate septum (ADS) are primarily devoted to processing uncertainty in a highly specific manner. Those neurons were selectively activated by visual cues indicating probabilistic delivery of reward (e.g. 25%, 50%, 75% reward) and did not respond to cues indicating certain outcomes (0% and 100% reward). The average ADS uncertainty response was graded with the magnitude of reward uncertainty, and selectively signaled uncertainty about rewards rather than punishments. The selective and graded information about reward uncertainty encoded by many neurons in the ADS may underlie uncertainty-modulation of value- and sensorimotor- related areas to regulate goal-directed behavior. PMID:23666181
Porfirio, Luciana L.; Harris, Rebecca M. B.; Lefroy, Edward C.; Hugh, Sonia; Gould, Susan F.; Lee, Greg; Bindoff, Nathaniel L.; Mackey, Brendan
2014-01-01
Choice of variables, climate models and emissions scenarios all influence the results of species distribution models under future climatic conditions. However, an overview of applied studies suggests that the uncertainty associated with these factors is not always appropriately incorporated or even considered. We examine the effects that the choice of variables, climate models and emissions scenarios can have on future species distribution models using two endangered species: one a short-lived invertebrate species (Ptunarra Brown Butterfly), and the other a long-lived paleo-endemic tree species (King Billy Pine). We show the range in projected distributions that result from different variable selection, climate models and emissions scenarios. The extent to which results are affected by these choices depends on the characteristics of the species modelled, but they all have the potential to substantially alter conclusions about the impacts of climate change. We discuss implications for conservation planning and management, and provide recommendations to conservation practitioners on variable selection and accommodating uncertainty when using future climate projections in species distribution models. PMID:25420020
Numerical, mathematical models of water and chemical movement in soils are used as decision aids for determining soil screening levels (SSLs) of radionuclides in the unsaturated zone. Many models require extensive input parameters which include uncertainty due to soil variabil...
NASA Astrophysics Data System (ADS)
Shi, X.; Zhang, G.
2013-12-01
Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for process-based multi-phase models of geological carbon sequestration (GCS). The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models is not only due to the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also because the GCS optimization estimation problem has multiple local minima due to the complex nonlinear multi-phase (gas and aqueous) and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves the computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response-surface global optimization algorithm is first used to calibrate the model parameters; the prediction uncertainty of the CO2 plume position arising from the propagation of parametric uncertainty is then quantified in the numerical experiments and compared to the actual plume from the 'true' model. Results show that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification for computationally expensive simulation models. Both our inverse methodology and our findings are broadly applicable to GCS in heterogeneous storage formations.
Yu, Yajuan; Wang, Xiang; Wang, Dong; Huang, Kai; Wang, Lijing; Bao, Liying; Wu, Feng
2012-08-30
An environmental impact assessment model for secondary batteries under uncertainty is proposed, which is a combination of the life cycle assessment (LCA), Eco-indicator 99 system and Monte Carlo simulation (MCS). The LCA can describe the environmental impact mechanism of secondary batteries, whereas the cycle performance was simulated through MCS. The composite LCA-MCS model was then carried out to estimate the environmental impact of two kinds of experimental batteries. Under this kind of standard assessment system, a comparison between different batteries could be accomplished. The following results were found: (1) among the two selected batteries, the environmental impact of the Li-ion battery is lower than the nickel-metal hydride (Ni-MH) battery, especially with regards to resource consumption and (2) the lithium ion (Li-ion) battery is less sensitive to cycle uncertainty, its environmental impact fluctuations are small when compared with the selected Ni-MH battery and it is more environmentally friendly. The assessment methodology and model proposed in this paper can also be used for any other secondary batteries and they can be helpful in the development of environmentally friendly secondary batteries. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
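The LCA-MCS coupling described above can be illustrated with a toy calculation: a fixed production-phase impact is divided by the lifetime energy delivered, while cycle life is sampled from a distribution to represent the cycle-performance uncertainty. All numbers (impact scores, capacities, cycle-life distributions) below are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo sample size

def impact_per_kwh(production_impact, capacity_kwh, cycle_life_samples):
    """Eco-indicator-style impact per kWh delivered over the battery life."""
    energy_delivered = capacity_kwh * cycle_life_samples
    return production_impact / energy_delivered

# Hypothetical inputs for two chemistries (impact in arbitrary points).
liion = impact_per_kwh(120.0, 1.0, rng.normal(1500, 200, n).clip(min=300))
nimh  = impact_per_kwh(150.0, 1.0, rng.normal(800, 250, n).clip(min=300))

for name, s in (("Li-ion", liion), ("Ni-MH", nimh)):
    lo, hi = np.percentile(s, [5, 95])
    print(f"{name}: mean {s.mean():.3f} pts/kWh, 90% interval [{lo:.3f}, {hi:.3f}]")
```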
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
NASA Astrophysics Data System (ADS)
Terando, A. J.; Reich, B. J.; Pacifici, K.
2013-12-01
Fire is an important disturbance process in many coupled natural-human systems. Changes in the frequency and severity of fires due to anthropogenic climate change could have significant costs to society and the plant and animal communities that are adapted to a particular fire regime. Planning for these changes requires a robust model of the relationship between climate and fire that accounts for multiple sources of uncertainty that are present when simulating ecological and climatological processes. Here we model how anthropogenic climate change could affect the wildfire regime for a region in the Southeast US whose natural ecosystems are dependent on frequent, low-intensity fires while humans are at risk from large catastrophic fires. We develop a modeling framework that incorporates three major sources of uncertainty: (1) uncertainty in the ecological drivers of expected monthly area burned, (2) uncertainty in the environmental drivers influencing the probability of an extreme fire event, and (3) structural uncertainty in different downscaled climate models. In addition we use two policy-relevant emission scenarios (climate stabilization and 'business-as-usual') to characterize the uncertainty in future greenhouse gas forcings. We use a Bayesian framework to incorporate different sources of uncertainty including simulation of predictive errors and Stochastic Search Variable Selection. Our results suggest that although the mean process remains stationary, the probability of extreme fires declines through time, owing to the persistence of high atmospheric moisture content during the peak fire season that dampens the effect of increasing temperatures. Including multiple sources of uncertainty leads to wide prediction intervals, but is potentially more useful for decision-makers who will require adaptation strategies that are robust to rapid but uncertain climate and ecological change.
Tan, Can Ozan; Bullock, Daniel
2008-10-01
Recently, dopamine (DA) neurons of the substantia nigra pars compacta (SNc) were found to exhibit sustained responses related to reward uncertainty, in addition to the phasic responses related to reward-prediction errors (RPEs). Thus, cue-dependent anticipations of the timing, magnitude, and uncertainty of rewards are learned and reflected in components of DA signals. Here we simulate a local circuit model to show how learned uncertainty responses are generated, along with phasic RPE responses, on single trials. Both types of simulated DA responses exhibit the empirically observed dependencies on conditional probability, expected value of reward, and time since onset of the reward-predicting cue. The model's three major pathways compute expected values of cues, timed predictions of reward magnitudes, and uncertainties associated with these predictions. The first two pathways' computations refine those modeled by Brown et al. (1999). The third, newly modeled, pathway involves medium spiny projection neurons (MSPNs) of the striatal matrix, whose axons corelease GABA and substance P, both at synapses with GABAergic neurons in the substantia nigra pars reticulata (SNr) and with distal dendrites (in SNr) of DA neurons whose somas are located in ventral SNc. Corelease enables efficient computation of uncertainty responses that are a nonmonotonic function of the conditional probability of reward, and variability in striatal cholinergic transmission can explain observed individual differences in the amplitudes of uncertainty responses. The involvement of matricial MSPNs and cholinergic transmission within the striatum implies a relation between uncertainty in cue-reward contingencies and action-selection functions of the basal ganglia.
NASA Astrophysics Data System (ADS)
Elfgen, S.; Franck, D.; Hameyer, K.
2018-04-01
Magnetic measurements are indispensable for the characterization of soft magnetic material used, e.g., in electrical machines. Characteristic values are used for quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, the existing influencing factors in the characterization of soft magnetic materials are identified and their resulting uncertainty contributions are studied. The analysis of the resulting uncertainty contributions can serve the operator as an additional selection criterion for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard using a single sheet tester, and its impact on the identification of iron loss parameters is studied.
NASA Astrophysics Data System (ADS)
Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.
2015-12-01
Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and literature survey, and samples were generated with a quasi-Monte Carlo approach, the Sobol sequence. Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
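The sampling step described above can be sketched with SciPy's quasi-Monte Carlo module: a Sobol sequence is drawn in the unit hypercube and rescaled to the prior ranges of the parameters. The parameter names and ranges below are placeholders, not the CLM4.5 values.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical uniform prior ranges for four PFT-dependent parameters.
names = ["mbbopt", "slatop", "leafcn", "flnr"]
lower = np.array([4.0, 0.005, 20.0, 0.05])
upper = np.array([13.0, 0.04, 60.0, 0.30])

# Sobol sequence in [0, 1]^4; 2**10 = 1024 samples, matching the ensemble size.
sampler = qmc.Sobol(d=4, scramble=True, seed=0)
unit_samples = sampler.random_base2(m=10)

# Rescale to the prior ranges; each row is one parameter set for a model run.
ensemble = qmc.scale(unit_samples, lower, upper)

print("ensemble shape:", ensemble.shape)
for j, name in enumerate(names):
    print(f"{name}: min {ensemble[:, j].min():.3f}, max {ensemble[:, j].max():.3f}")
```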
Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J
2017-07-01
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
2011-03-09
task stability, technology application certainty, risk, and transaction-specific investments impact the selection of the optimal mode of governance. Our model views...U.S. Defense Industry. The 1990s were a perfect storm of technological change, consolidation, budget downturns, environmental uncertainty, and the
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2012-04-01
The estimation of reward outcomes for action candidates is essential for decision making. In this study, we examined whether and how the uncertainty in reward outcome estimation affects the action choice and learning rate. We designed a choice task in which rats selected either the left-poking or right-poking hole and received a reward of a food pellet stochastically. The reward probabilities of the left and right holes were chosen from six settings (high, 100% vs. 66%; mid, 66% vs. 33%; low, 33% vs. 0% for the left vs. right holes, and the opposites) in every 20-549 trials. We used Bayesian Q-learning models to estimate the time course of the probability distribution of action values and tested if they better explain the behaviors of rats than standard Q-learning models that estimate only the mean of action values. Model comparison by cross-validation revealed that a Bayesian Q-learning model with an asymmetric update for reward and non-reward outcomes fit the choice time course of the rats best. In the action-choice equation of the Bayesian Q-learning model, the estimated coefficient for the variance of action value was positive, meaning that rats were uncertainty seeking. Further analysis of the Bayesian Q-learning model suggested that the uncertainty facilitated the effective learning rate. These results suggest that the rats consider uncertainty in action-value estimation and that they have an uncertainty-seeking action policy and uncertainty-dependent modulation of the effective learning rate. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
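A minimal sketch of the idea of choosing by both the mean and the variance of an action value: here each hole's reward probability is tracked with a Beta posterior, used as a simple stand-in for the Bayesian Q-learning models in the abstract, and the choice rule puts a positive coefficient on the posterior variance, i.e. uncertainty seeking. The task settings loosely mirror the abstract; the specific update rule and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
p_reward = {"left": 0.66, "right": 0.33}   # one of the six probability settings
beta_temp, phi = 5.0, 8.0                  # inverse temperature, positive variance weight
post = {a: [1.0, 1.0] for a in p_reward}   # Beta(alpha, beta) posterior per action

def moments(ab):
    a, b = ab
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

choices = []
for t in range(500):
    # Action score = posterior mean + positive weight on posterior variance.
    score = {a: moments(ab)[0] + phi * moments(ab)[1] for a, ab in post.items()}
    logits = np.array([beta_temp * score[a] for a in post])
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    action = rng.choice(list(post), p=probs)
    reward = rng.random() < p_reward[action]
    # Posterior update: reward increments alpha, non-reward increments beta.
    post[action][0 if reward else 1] += 1
    choices.append(action)

print("fraction of 'left' choices:", np.mean([c == "left" for c in choices]).round(3))
print("final posteriors:", {a: np.round(ab, 1).tolist() for a, ab in post.items()})
```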
Methods for handling uncertainty within pharmaceutical funding decisions
NASA Astrophysics Data System (ADS)
Stevenson, Matt; Tappenden, Paul; Squires, Hazel
2014-01-01
This article provides a position statement regarding decision making under uncertainty within the economic evaluation of pharmaceuticals, with a particular focus upon the National Institute for Health and Clinical Excellence context within England and Wales. This area is of importance as funding agencies have a finite budget from which to purchase a selection of competing health care interventions. The objective function generally used is that of maximising societal health with an explicit acknowledgement that there will be opportunity costs associated with purchasing a particular intervention. Three components of uncertainty are discussed within a pharmaceutical funding perspective: methodological uncertainty, parameter uncertainty and structural uncertainty, alongside a discussion of challenges that are particularly pertinent to health economic evaluation. The discipline has focused primarily on handling methodological and parameter uncertainty and a clear reference case has been developed for consistency across evaluations. However, uncertainties still remain. Less attention has been given to methods for handling structural uncertainty. The lack of adequate methods to explicitly incorporate this aspect of model development may result in the true uncertainty surrounding health care investment decisions being underestimated. Research in this area is ongoing as we review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2011-01-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarized the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2σ) uncertainty on the maximum fuel temperature was between ±58 °C (3.6%) and ±76 °C (4.7%) on a mean value of 1604 °C. These values mostly depended on the selection of the distribution types, and not on the number of model calculations above the required Wilks criterion (a (95%,95%) statement would usually require 93 model runs).
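The Wilks criterion mentioned at the end can be made explicit: for a one-sided (95%, 95%) tolerance statement the first-order formula 1 - γ^n ≥ β gives n ≥ 59, and the two-sided version gives the 93 runs quoted in the report. A small helper, written as an editorial illustration rather than part of the report, reproduces both numbers:

```python
def wilks_n(coverage=0.95, confidence=0.95, two_sided=False):
    """Smallest sample size satisfying the first-order Wilks criterion."""
    g = coverage
    n = 1
    while True:
        if two_sided:
            # P(at least `coverage` of the population lies between min and max)
            conf = 1.0 - g**n - n * (1.0 - g) * g ** (n - 1)
        else:
            # P(at least `coverage` of the population lies below the max)
            conf = 1.0 - g**n
        if conf >= confidence:
            return n
        n += 1

print("one-sided (95%, 95%):", wilks_n())                 # 59
print("two-sided (95%, 95%):", wilks_n(two_sided=True))   # 93
```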
NASA Astrophysics Data System (ADS)
Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.
2016-12-01
Hydrogeophysical surveys have become an integral part of understanding hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data is, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximate linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge to reduce forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of geophysical surveying is assumed to be estimating values and spatial variation in hydrologic parameters (i.e. hydraulic conductivity) as well as mapping lower permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilot points along potential flight lines was reduced. The FOSM forecast uncertainty estimates were then recalculated and compared to the base forecast uncertainty estimates. The resulting reduction in forecast uncertainty is a measure of the effect on the model from the AEM survey. Iterating through this process results in optimization of flight-line locations.
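A compact numerical illustration of the FOSM data-worth calculation: with a linearised model, the forecast variance is s^T C s for parameter covariance C and forecast sensitivity s, and the worth of a hypothetical observation set is the reduction in that variance after a Bayesian (Schur-complement) update of C. The dimensions, sensitivities and noise levels below are made up; they show only the mechanics, not the MERAS model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_par = 30  # pilot-point parameters (e.g. log-K and recharge multipliers)

# Prior parameter covariance (diagonal here for simplicity) and the
# sensitivity of the forecast (e.g. drawdown at a well) to each parameter.
C_prior = np.diag(rng.uniform(0.2, 1.0, n_par))
s = rng.normal(0.0, 0.3, n_par)

def forecast_var(C):
    return s @ C @ s

def posterior_cov(C, J, noise_var):
    """FOSM update: C_post = C - C J^T (J C J^T + R)^-1 J C."""
    R = noise_var * np.eye(J.shape[0])
    S = J @ C @ J.T + R
    return C - C @ J.T @ np.linalg.solve(S, J @ C)

# Hypothetical AEM survey: each of 10 "observations" informs one parameter.
J_survey = np.zeros((10, n_par))
informed = rng.choice(n_par, size=10, replace=False)
J_survey[np.arange(10), informed] = 1.0

base = forecast_var(C_prior)
post = forecast_var(posterior_cov(C_prior, J_survey, noise_var=0.05))
print(f"forecast variance: prior {base:.3f} -> post-survey {post:.3f}")
print(f"data worth (relative uncertainty reduction): {1 - post / base:.1%}")
```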
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same can not be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In particular, e.g. in climate models, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Gaussian Process Model for Antarctic Surface Mass Balance and Ice Core Site Selection
NASA Astrophysics Data System (ADS)
White, P. A.; Reese, S.; Christensen, W. F.; Rupper, S.
2017-12-01
Surface mass balance (SMB) is an important factor in the estimation of sea level change, and data are collected to estimate models for prediction of SMB on the Antarctic ice sheet. Using Favier et al.'s (2013) quality-controlled aggregate data set of SMB field measurements, a fully Bayesian spatial model is posed to estimate Antarctic SMB and propose new field measurement locations. Utilizing Nearest-Neighbor Gaussian process (NNGP) models, SMB is estimated over the Antarctic ice sheet. An Antarctic SMB map is rendered using this model and is compared with previous estimates. A prediction uncertainty map is created to identify regions of high SMB uncertainty. The model estimates net SMB to be 2173 Gton yr-1 with 95% credible interval (2021, 2331) Gton yr-1. On average, these results suggest lower Antarctic SMB and higher uncertainty than previously purported [Vaughan et al. (1999); Van de Berg et al. (2006); Arthern, Winebrenner and Vaughan (2006); Bromwich et al. (2004); Lenaerts et al. (2012)], even though this model utilizes significantly more observations than previous models. Using the Gaussian process' uncertainty and model parameters, we propose 15 new measurement locations for field study utilizing a maximin space-filling, error-minimizing design; these potential measurements are identified to minimize future estimation uncertainty. Using currently accepted Antarctic mass balance estimates and our SMB estimate, we estimate net mass loss [Shepherd et al. (2012); Jacob et al. (2012)]. Furthermore, we discuss modeling details for both space-time data and combining field measurement data with output from mathematical models using the NNGP framework.
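The site-selection step can be sketched as a greedy maximin design on the prediction-uncertainty field: at each step, pick the candidate location that maximises its minimum distance to the already-chosen (and existing) sites, restricted to candidates with high predicted uncertainty. The coordinates and uncertainty values below are synthetic placeholders for the NNGP outputs, not the Antarctic data.

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic stand-ins for NNGP outputs: candidate grid locations (km) with
# predictive standard deviations, plus existing field-measurement sites.
candidates = rng.uniform(0, 4000, size=(2000, 2))
pred_sd = rng.gamma(shape=2.0, scale=40.0, size=2000)
existing = rng.uniform(0, 4000, size=(200, 2))

# Keep only candidates in the top quartile of predictive uncertainty.
cand = candidates[pred_sd >= np.quantile(pred_sd, 0.75)]

def greedy_maximin(cand, existing, n_new):
    chosen, sites = [], existing.copy()
    for _ in range(n_new):
        # Distance from every candidate to its nearest already-selected site.
        d = np.linalg.norm(cand[:, None, :] - sites[None, :, :], axis=2).min(axis=1)
        best = int(np.argmax(d))
        chosen.append(cand[best])
        sites = np.vstack([sites, cand[best]])
    return np.array(chosen)

new_sites = greedy_maximin(cand, existing, n_new=15)
print("proposed site coordinates (km):")
print(np.round(new_sites, 1))
```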
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth. First, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous, macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes, and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-fields models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
Bayes factors and multimodel inference
Link, W.A.; Barker, R.J.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
Multimodel inference has two main themes: model selection, and model averaging. Model averaging is a means of making inference conditional on a model set, rather than on a selected model, allowing formal recognition of the uncertainty associated with model choice. The Bayesian paradigm provides a natural framework for model averaging, and provides a context for evaluation of the commonly used AIC weights. We review Bayesian multimodel inference, noting the importance of Bayes factors. Noting the sensitivity of Bayes factors to the choice of priors on parameters, we define and propose nonpreferential priors as offering a reasonable standard for objective multimodel inference.
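A worked toy example of the quantities discussed above: with marginal likelihoods (evidences) in hand, the Bayes factor between two models is their ratio, posterior model probabilities follow from the evidences and the model priors, and a model-averaged estimate weights each model's estimate by its posterior probability; AIC weights are shown alongside for comparison. The log-evidences, AIC values and per-model estimates are invented for illustration.

```python
import numpy as np

# Hypothetical log marginal likelihoods (evidences) and AIC values for
# three candidate models; equal prior model probabilities are assumed.
log_ml = np.array([-203.1, -204.6, -209.9])
aic = np.array([412.4, 414.9, 425.1])

# Bayes factor of model 1 vs model 2 (ratio of evidences).
bf_12 = np.exp(log_ml[0] - log_ml[1])

# Posterior model probabilities from the evidences (equal priors).
w_bayes = np.exp(log_ml - log_ml.max())
w_bayes /= w_bayes.sum()

# The commonly used AIC weights, for comparison.
w_aic = np.exp(-0.5 * (aic - aic.min()))
w_aic /= w_aic.sum()

# Model-averaged estimate of a parameter of interest (per-model estimates).
theta_hat = np.array([0.42, 0.55, 0.71])
print(f"Bayes factor B12: {bf_12:.2f}")
print(f"posterior model probabilities: {np.round(w_bayes, 3)}")
print(f"AIC weights:                   {np.round(w_aic, 3)}")
print(f"model-averaged estimate: {np.sum(w_bayes * theta_hat):.3f}")
```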
An adaptive response surface method for crashworthiness optimization
NASA Astrophysics Data System (ADS)
Shi, Lei; Yang, Ren-Jye; Zhu, Ping
2013-11-01
Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built by a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
Eicher, Bernhard
2016-10-01
Hospitals are responsible for a remarkable part of the annual increase in healthcare expenditure. This article examines one of the major cost drivers, the expenditure for investment in hospital assets. The study, conducted in Switzerland, identifies factors that influence hospitals' investment decisions. A suggestion on how to categorize asset investment models is presented based on the life cycle of an asset, and its influencing factors defined based on transaction cost economics. The influence of five factors (human asset specificity, physical asset specificity, uncertainty, bargaining power, and privacy of ownership) on the selection of an asset investment model is examined using a two-step fuzzy-set Qualitative Comparative Analysis. The research shows that outsourcing-oriented asset investment models are particularly favored in the presence of two combinations of influencing factors: First, if technological uncertainty is high and both human asset specificity and bargaining power of a hospital are low. Second, if assets are very specific, technological uncertainty is high and there is a private hospital with low bargaining power, outsourcing-oriented asset investment models are favored too. Using Qualitative Comparative Analysis, it can be demonstrated that investment decisions of hospitals do not depend on isolated influencing factors but on a combination of factors. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Wifall, Tim; Hazeltine, Eliot; Toby Mordkoff, J
2016-07-01
Hick/Hyman Law describes one of the core phenomena in the study of human information processing: mean response time is a linear function of average uncertainty. In the original work of Hick (1952) and Hyman (1953), along with many follow-up studies, uncertainty regarding the stimulus and uncertainty regarding the response were confounded such that the relative importance of these two factors remains mostly unknown. The present work first replicates Hick/Hyman Law with a new set of stimuli and then goes on to separately estimate the roles of stimulus and response uncertainty. The results demonstrate that, for a popular type of task (visual stimuli mapped to vocal responses), response uncertainty accounts for a majority of the effect. The results justify a revised expression of Hick/Hyman Law and place strong constraints on theoretical accounts of the law, as well as models of response selection in general.
TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don; Bowman, Stephen M
2009-01-01
This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k_eff sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k_eff values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system and TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
Uncertainty Analysis for a Jet Flap Airfoil
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Cruz, Josue
2006-01-01
An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental / computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in independent variable, given just the input data points from selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genser, Krzysztof; Hatcher, Robert; Kelsey, Michael
The Geant4 simulation toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models rely on measured cross-sections and phenomenological models with the physically motivated parameters that are tuned to cover many application domains. To study what uncertainties are associated with the Geant4 physics models we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Based on modern event-processing infrastructure software, the toolkit offers a variety of attractive features, e.g. flexible run-time configurable workflow, comprehensive bookkeeping, easy to expand collection of analytical components. Design, implementation technology, and key functionalities of the toolkit are presented in this paper and illustrated with selected results.
Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model
NASA Astrophysics Data System (ADS)
Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef
2016-10-01
We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E
2011-01-01
Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation, and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte-Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
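The EVPI for an averaged model can be computed directly from posterior samples of net benefit: pool the per-model samples in proportion to the posterior model weights, then take EVPI = E[max_t NB_t] - max_t E[NB_t]. The weights, treatments and net-benefit distributions below are invented; the sketch shows only the mechanics of that calculation, not the formula derived in the article.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000                                # posterior samples per model
weights = np.array([0.55, 0.30, 0.15])    # hypothetical posterior model weights

def nb_samples(means, sds, size):
    """Posterior samples of net benefit for four treatments under one model."""
    return rng.normal(means, sds, size=(size, len(means)))

# Hypothetical per-model net-benefit distributions (arbitrary monetary units).
models = [
    nb_samples([10_000, 10_400, 10_250, 9_800], [900, 950, 900, 850], n),
    nb_samples([10_000, 10_150, 10_500, 9_900], [900, 950, 900, 850], n),
    nb_samples([10_000, 10_300, 10_300, 10_100], [900, 950, 900, 850], n),
]

# Model-averaged posterior: each sample comes from a model drawn by its weight.
idx = rng.choice(len(models), size=n, p=weights)
pooled = np.stack(models)[idx, np.arange(n), :]

evpi = pooled.max(axis=1).mean() - pooled.mean(axis=0).max()
print(f"optimal treatment under current information: t{pooled.mean(axis=0).argmax()}")
print(f"per-decision EVPI: {evpi:.1f}")
```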
Robustness of risk maps and survey networks to knowledge gaps about a new invasive pest.
Yemshanov, Denys; Koch, Frank H; Ben-Haim, Yakov; Smith, William D
2010-02-01
In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads to risk-ignorant decisions and miscalculation of expected impacts as well as the costs required to minimize these impacts. Here we use the information gap concept to evaluate the robustness of risk maps to uncertainties in key assumptions about an invading organism. We generate risk maps with a spatial model of invasion that simulates potential entries of an invasive pest via international marine shipments, their spread through a landscape, and establishment on a susceptible host. In particular, we focus on the question of how much uncertainty in risk model assumptions can be tolerated before the risk map loses its value. We outline this approach with an example of a forest pest recently detected in North America, Sirex noctilio Fabricius. The results provide a spatial representation of the robustness of predictions of S. noctilio invasion risk to uncertainty and show major geographic hotspots where the consideration of uncertainty in model parameters may change management decisions about a new invasive pest. We then illustrate how the dependency between the extent of uncertainties and the degree of robustness of a risk map can be used to select a surveillance network design that is most robust to knowledge gaps about the pest.
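A stripped-down illustration of the information-gap robustness idea described above: let the nominal establishment probability at each site be p̃, let the horizon of uncertainty h define an interval [p̃(1-h), p̃(1+h)] of possible values, and define the robustness of a management decision as the largest h for which the worst case still satisfies a risk requirement. The nominal values, requirement and interval uncertainty model are placeholders, not those of the S. noctilio assessment.

```python
import numpy as np

def robustness(p_nominal, risk_cap, h_grid=np.linspace(0, 1, 1001)):
    """Largest horizon of uncertainty h such that the worst-case expected
    number of infested sites still meets the requirement (info-gap style)."""
    h_star = 0.0
    for h in h_grid:
        worst_p = np.clip(p_nominal * (1.0 + h), 0.0, 1.0)  # worst case in the interval
        if worst_p.sum() <= risk_cap:        # expected infested sites, worst case
            h_star = h
        else:
            break
    return h_star

# Hypothetical nominal establishment probabilities for 20 candidate survey sites
# and a requirement that no more than 3 sites are expected to be infested.
rng = np.random.default_rng(9)
p_nom = rng.uniform(0.02, 0.15, size=20)
print(f"nominal expected infested sites: {p_nom.sum():.2f}")
print(f"robustness h*: {robustness(p_nom, risk_cap=3.0):.3f}")
```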
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D; Field, C; Cahill, K
2006-01-10
Most research on the agricultural impacts of climate change has focused on the major annual crops, yet perennial cropping systems are less adaptable and thus potentially more susceptible to damage. Improved assessments of yield responses to future climate are needed to prioritize adaptation strategies in the many regions where perennial crops are economically and culturally important. These impact assessments, in turn, must rely on climate and crop models that contain often poorly defined uncertainties. We evaluated the impact of climate change on six major perennial crops in California: wine grapes, almonds, table grapes, oranges, walnuts, and avocados. Outputs from multiple climate models were used to evaluate climate uncertainty, while multiple statistical crop models, derived by resampling historical databases, were used to address crop response uncertainties. We find that, despite these uncertainties, climate change in California is very likely to put downward pressure on yields of almonds, walnuts, avocados, and table grapes by 2050. Without CO2 fertilization or adaptation measures, projected losses range from 0 to >40% depending on the crop and the trajectory of climate change. Climate change uncertainty generally had a larger impact on projections than crop model uncertainty, although the latter was substantial for several crops. Opportunities for expansion into cooler regions are identified, but this adaptation would require substantial investments and may be limited by non-climatic constraints. Given the long time scales for growth and production of orchards and vineyards (approximately 30 years), climate change should be an important factor in selecting perennial varieties and deciding whether and where perennials should be planted.
Hunt, Randall J.; Westenbroek, Stephen M.; Walker, John F.; Selbig, William R.; Regan, R. Steven; Leaf, Andrew T.; Saad, David A.
2016-08-23
Potential future changes in air temperature drivers were consistently upward regardless of General Circulation Model and emission scenario selected; thus, simulated stream temperatures are forecast to increase appreciably with future climate. However, the amount of temperature increase was variable. Such uncertainty is reflected in temperature model results, along with uncertainty in the groundwater/surface-water interaction itself. The estimated increase in annual average temperature ranged from approximately 3 to 6 degrees Celsius by 2100 in the upper reaches of Black Earth Creek and 2 to 4 degrees Celsius in reaches farther downstream. As with all forecasts that rely on projections of an unknowable future, the results are best considered to approximate potential outcomes of climate change given the underlying uncertainty.
An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon
NASA Technical Reports Server (NTRS)
Rutherford, Brian
2000-01-01
The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.
Bayesian effect estimation accounting for adjustment uncertainty.
Wang, Chi; Parmigiani, Giovanni; Dominici, Francesca
2012-09-01
Model-based estimation of the effect of an exposure on an outcome is generally sensitive to the choice of which confounding factors are included in the model. We propose a new approach, which we call Bayesian adjustment for confounding (BAC), to estimate the effect of an exposure of interest on the outcome, while accounting for the uncertainty in the choice of confounders. Our approach is based on specifying two models: (1) the outcome as a function of the exposure and the potential confounders (the outcome model); and (2) the exposure as a function of the potential confounders (the exposure model). We consider Bayesian variable selection on both models and link the two by introducing a dependence parameter, ω, denoting the prior odds of including a predictor in the outcome model, given that the same predictor is in the exposure model. In the absence of dependence (ω = 1), BAC reduces to traditional Bayesian model averaging (BMA). In simulation studies, we show that BAC, with ω > 1, estimates the exposure effect with smaller bias than traditional BMA, and improved coverage. We then compare BAC, a recent approach of Crainiceanu, Dominici, and Parmigiani (2008, Biometrika 95, 635-651), and traditional BMA in a time series data set of hospital admissions, air pollution levels, and weather variables in Nassau, NY for the period 1999-2005. Using each approach, we estimate the short-term effects of air pollution on emergency admissions for cardiovascular diseases, accounting for confounding. This application illustrates the potentially significant pitfalls of misusing variable selection methods in the context of adjustment uncertainty. © 2012, The International Biometric Society.
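The role of the dependence parameter ω can be made concrete with a tiny sketch of the inclusion prior it implies. The even-odds assumption for predictors absent from the exposure model is an illustrative simplification, not taken from the paper:

```python
def prior_inclusion_prob(in_exposure_model: bool, omega: float) -> float:
    """Prior probability that a candidate confounder enters the outcome model.
    omega is the prior odds of inclusion given the predictor is in the exposure
    model; even odds are assumed (for illustration) when it is not."""
    if in_exposure_model:
        return omega / (1.0 + omega)   # prior odds omega : 1
    return 0.5

for omega in (1.0, 10.0, 100.0):
    p = prior_inclusion_prob(True, omega)
    print(f"omega = {omega:>5.0f}: P(include in outcome model | in exposure model) = {p:.3f}")
```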
Tremblay, Raymond L.; Ackerman, James D.; Pérez, Maria-Eglée
2010-01-01
Evolutionary models estimating phenotypic selection in character size usually assume that the character is invariant across reproductive bouts. We show that variation in the size of reproductive traits may be large over multiple events and can influence fitness in organisms where these traits are produced anew each season. With data from populations of two orchid species, Caladenia valida and Tolumnia variegata, we used Bayesian statistics to investigate the effect on the distribution in fitness of individuals when the fitness landscape is not flat and when characters vary across reproductive bouts. Inconsistency in character size across reproductive periods within an individual increases the uncertainty of mean fitness and, consequently, the uncertainty in individual fitness. The trajectory of selection is likely to be muddled as a consequence of variation in morphology of individuals across reproductive bouts. The frequency and amplitude of such changes will certainly affect the dynamics between selection and genetic drift. PMID:20047875
Trimming the UCERF2 hazard logic tree
Porter, Keith A.; Field, Edward H.; Milner, Kevin
2012-01-01
The Uniform California Earthquake Rupture Forecast 2 (UCERF2) is a fully time‐dependent earthquake rupture forecast developed with sponsorship of the California Earthquake Authority (Working Group on California Earthquake Probabilities [WGCEP], 2007; Field et al., 2009). UCERF2 contains 480 logic‐tree branches reflecting choices among nine modeling uncertainties in the earthquake rate model shown in Figure 1. For seismic hazard analysis, it is also necessary to choose a ground‐motion‐prediction equation (GMPE) and set its parameters. Choosing among four next‐generation attenuation (NGA) relationships results in a total of 1920 hazard calculations per site. The present work is motivated by a desire to reduce the computational effort involved in a hazard analysis without understating uncertainty. We set out to assess which branching points of the UCERF2 logic tree contribute most to overall uncertainty, and which might be safely ignored (set to only one branch) without significantly biasing results or affecting some useful measure of uncertainty. The trimmed logic tree will have all of the original choices from the branching points that contribute significantly to uncertainty, but only one arbitrarily selected choice from the branching points that do not.
Sun, Guibo; Webster, Chris; Ni, Michael Y; Zhang, Xiaohu
2018-05-07
Uncertainty with respect to built environment (BE) data collection, measure conceptualization and spatial scales is evident in urban health research, but most findings are from relatively low-density contexts. We selected Hong Kong, an iconic high-density city, as the study area, as limited research has been conducted on uncertainty in such areas. We used geocoded home addresses (n=5732) from a large population-based cohort in Hong Kong to extract BE measures for the participants' place of residence based on an internationally recognized BE framework. Variability of the measures was mapped and Spearman's rank correlation calculated to assess how well the relationships among indicators are preserved across variables and spatial scales. We found extreme variations and uncertainties for the 180 measures collected using comprehensive data and advanced geographic information systems modelling techniques. We highlight the implications of methodological selection and spatial scales of the measures. The results suggest that more robust information regarding urban health research in high-density cities would emerge if greater consideration were given to BE data, design methods and spatial scales of the BE measures.
A scenario optimization model for dynamic reserve site selection
Stephanie A. Snyder; Robert G. Haight; Charles S. ReVelle
2004-01-01
Conservation planners are called upon to make choices and trade-offs about the preservation of natural areas for the protection of species in the face of development pressures. We addressed the problem of selecting sites for protection over time with the objective of maximizing species representation, with uncertainty about future site development, and with periodic...
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that the ANN's point prediction lacks reliability, since the uncertainty of the predictions is not quantified, and this limits its use in practical applications. A major concern in applying traditional uncertainty analysis techniques to the neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases, and during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig. 1. It is noted that most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is considered as a forecast, the peak flows are predicted with improved accuracy compared to traditional single-point forecast ANNs. Fig. 1: Prediction interval for a selected hydrograph.
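The three stage-2 objectives named above can be computed for any candidate ensemble in a few lines. A minimal sketch with synthetic flows and a hypothetical 50-member ensemble (data, interval level, and values are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed flows and a 50-member ensemble of forecasts (m3/s).
observed = rng.gamma(shape=2.0, scale=50.0, size=200)
ensemble = observed + rng.normal(0.0, 15.0, size=(50, 200))

lower = np.percentile(ensemble, 2.5, axis=0)     # lower bound of prediction interval
upper = np.percentile(ensemble, 97.5, axis=0)    # upper bound of prediction interval
mean = ensemble.mean(axis=0)                     # ensemble-mean forecast

coverage = np.mean((observed >= lower) & (observed <= upper))  # objective (ii)
avg_width = np.mean(upper - lower)                             # objective (iii)
resid_var = np.var(observed - mean)                            # objective (i)

print(f"coverage = {coverage:.1%}, average width = {avg_width:.1f} m3/s, "
      f"residual variance of ensemble mean = {resid_var:.1f}")
```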
Uncertainty in age-specific harvest estimates and consequences for white-tailed deer management
Collier, B.A.; Krementz, D.G.
2007-01-01
Age structure proportions (proportion of harvested individuals within each age class) are commonly used as support for regulatory restrictions and input for deer population models. Such use requires critical evaluation when harvest regulations force hunters to selectively harvest specific age classes, due to the impact on the underlying population age structure. We used a stochastic population simulation model to evaluate the impact of using harvest proportions to evaluate changes in population age structure under a selective harvest management program at two scales. Using harvest proportions to parameterize the age-specific harvest segment of the model for the local scale showed that predictions of post-harvest age structure did not vary depending on whether selective harvest criteria were in use or not. At the county scale, yearling frequency in the post-harvest population increased, but model predictions indicated that post-harvest population size of 2.5-year-old males would decline below levels found before implementation of the antler restriction, reducing the number of individuals recruited into older age classes. Across the range of age-specific harvest rates modeled, our simulation predicted that underestimation of age-specific harvest rates has considerable influence on predictions of post-harvest population age structure. We found that the consequence of uncertainty in harvest rates corresponds to uncertainty in predictions of residual population structure, and this correspondence is proportional to scale. Our simulations also indicate that regardless of use of harvest proportions or harvest rates, at either the local or county scale the modeled SHC had a high probability (>0.60 and >0.75, respectively) of eliminating recruitment into >2.5-year-old age classes. Although frequently used to increase population age structure, our modeling indicated that selective harvest criteria can decrease or eliminate the number of white-tailed deer recruited into older age classes. Thus, we suggest that using harvest proportions for management planning and evaluation should be viewed with caution. In addition, we recommend that managers focus more attention on estimation of age-specific harvest rates, and modeling approaches which combine harvest rates with information from harvested individuals to further increase their ability to effectively manage deer populations under selective harvest programs. © 2006 Elsevier B.V. All rights reserved.
Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo
2017-12-01
Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data points representative of the input/output relationship of the underlying model of interest. However, differently performing ANNs might be obtained with the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best performing ANN. On the other hand, using cross-validation to select the best performing ANN based on the highest R2 value can lead to bias in the prediction, because R2 cannot determine whether the prediction made by an ANN is biased. Additionally, R2 does not indicate whether a model is adequate, as it is possible to have a low R2 for a good model and a high R2 for a bad model. Hence, in this paper, we propose an approach to improve the robustness of a prediction made by an ANN. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL) and treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
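A minimal sketch of the combination idea, using stand-in prediction arrays instead of trained networks: each member is weighted by an approximate evidence computed from its validation residuals, and the weighted mixture yields a pooled prediction with a confidence band. The Gaussian-evidence shortcut and all numbers are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictions from five identically configured networks trained
# from different random initialisations, evaluated on a common validation set.
y_val = rng.normal(size=100)
preds = [y_val + rng.normal(0.0, s, size=100) for s in (0.20, 0.25, 0.30, 0.35, 0.40)]

def log_evidence(residuals):
    """Crude Gaussian-likelihood stand-in for each network's model evidence."""
    n, var = residuals.size, residuals.var()
    return -0.5 * n * (np.log(2.0 * np.pi * var) + 1.0)

log_w = np.array([log_evidence(y_val - p) for p in preds])
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

stacked = np.vstack(preds)                         # (n_models, n_points)
bma_mean = weights @ stacked                       # pooled (averaged) prediction
within_var = np.array([(y_val - p).var() for p in preds])
bma_var = weights @ (stacked - bma_mean) ** 2 + weights @ within_var
half_width = 1.96 * np.sqrt(bma_var)               # approximate 95% confidence band

print("model weights:", np.round(weights, 3))
print("mean 95% half-width:", float(half_width.mean()))
```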
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake--often $100,000 a day or more--and one party to the sale must always take on the risk. In the case of fixed price ($/MWh) contracts, the seller accepts the risk. In the case of cost plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
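The difference between brute-force and stratified (Latin Hypercube style) outage sampling can be illustrated with a toy example; the unit capacities, outage rates, demand level, and the simple stratification used here are hypothetical stand-ins for the production-costing setting described above:

```python
import numpy as np

rng = np.random.default_rng(2)

capacity = np.array([400., 350., 300., 200., 150., 100.])      # MW, hypothetical units
forced_outage_rate = np.array([0.08, 0.06, 0.10, 0.05, 0.12, 0.07])
demand = 1200.0                                                 # MW

def unserved_power(available):
    """Unserved demand (MW) for a boolean availability matrix (samples x units)."""
    supply = (available * capacity).sum(axis=1)
    return np.maximum(demand - supply, 0.0)

n = 2000
# Brute-force Monte Carlo: independent Bernoulli availability draws per unit.
brute = rng.random((n, capacity.size)) > forced_outage_rate

# Latin Hypercube style draws: one stratified uniform sample per unit,
# thresholded by the forced outage rate (a simple stratification sketch).
strata = (np.argsort(rng.random((n, capacity.size)), axis=0)
          + rng.random((n, capacity.size))) / n
lhs = strata > forced_outage_rate

for name, avail in (("brute force", brute), ("Latin Hypercube", lhs)):
    up = unserved_power(avail)
    print(f"{name:>16}: mean unserved = {up.mean():6.2f} MW, "
          f"standard error = {up.std(ddof=1) / np.sqrt(n):.3f}")
```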
NASA Astrophysics Data System (ADS)
Zhang, G.; Chen, F.; Gan, Y.
2017-12-01
Assessing and mitigating uncertainties in the Noah-MP land-model simulations over the Tibet Plateau region
Uncertainties in the Noah with multiparameterization (Noah-MP) land surface model were assessed through physics ensemble simulations for four sparsely vegetated sites located in the Tibetan Plateau region. Those simulations were evaluated using observations at the four sites during the third Tibetan Plateau Experiment (TIPEX III). The impacts of uncertainties in the precipitation data used as forcing, and in parameterizations of sub-processes such as soil organic matter and the rhizosphere, on the physics-ensemble simulations are identified using two different methods: natural selection and Tukey's test. This study attempts to answer the following questions: 1) what is the relative contribution of precipitation-forcing uncertainty to the overall uncertainty range of Noah-MP simulations at those sites, as compared to that at a moister and more densely vegetated site; 2) what are the most sensitive physical parameterizations for those sites; 3) can we identify the parameterizations that need to be improved? The investigation was conducted by evaluating the simulated seasonal evolution of soil temperature, soil moisture, and surface heat fluxes through a number of Noah-MP ensemble simulations.
NASA Astrophysics Data System (ADS)
Samaniego, Luis; Kumar, Rohini; Pechlivanidis, Illias; Breuer, Lutz; Wortmann, Michel; Vetter, Tobias; Flörke, Martina; Chamorro, Alejandro; Schäfer, David; Shah, Harsh; Zeng, Xiaofan
2016-04-01
The quantification of the predictive uncertainty in hydrologic models and its attribution to the main sources is of particular interest in climate change studies. In recent years, a number of studies have been aimed at assessing the ability of hydrologic models (HMs) to reproduce extreme hydrologic events. Disentangling the overall uncertainty of streamflow (including its derived low-flow characteristics) into individual contributions, stemming from forcings and model structure, has also been studied. Based on recent literature, it can be stated that there is a controversy with respect to which source is the largest (e.g., Teng et al. 2012, Bosshard et al. 2013, Prudhomme et al. 2014). Very little has been done to estimate the relative impact of the parametric uncertainty of the HMs with respect to the overall uncertainty of low-flow characteristics. The ISI-MIP2 project provides a unique opportunity to understand the propagation of forcing and model structure uncertainties into century-long time series of drought characteristics. This project defines a consistent framework to deal with compatible initial conditions for the HMs and a set of standardized historical and future forcings. Moreover, the ensemble of hydrologic model predictions varies across a broad range of climate scenarios and regions. To achieve this goal, we use six preconditioned hydrologic models (HYPE or HBV, mHM, SWIM, VIC, and WaterGAP3) set up in seven large continental river basins: Amazon, Blue Nile, Ganges, Niger, Mississippi, Rhine, Yellow. These models are forced with bias-corrected outputs of five CMIP5 general circulation models (GCMs) under four representative concentration pathway (RCP) scenarios (i.e., 2.6, 4.5, 6.0, and 8.5 W m-2) for the period 1971-2099. Simulated streamflow is transformed into a monthly runoff index (RI) to analyze the attribution of GCM and HM uncertainty to drought magnitude and duration over time. Uncertainty contributions are investigated during three periods: 1) 2006-2035, 2) 2036-2065 and 3) 2070-2099. Results presented in Samaniego et al. 2015 (submitted) indicate that GCM uncertainty mostly dominates over HM uncertainty for predictions of runoff drought characteristics, irrespective of the selected RCP and region. For the mHM model, in particular, GCM uncertainty always dominates over parametric uncertainty. In general, the overall uncertainty increases with time. The larger the radiative forcing of the RCP, the larger the uncertainty in drought characteristics; however, the propagation of GCM uncertainty onto a drought characteristic depends largely upon the hydro-climatic regime. While our study emphasizes the need for multi-model ensembles for the assessment of future drought projections, the agreement between GCM forcings is still too weak to draw conclusive recommendations. References: Samaniego, L., Kumar, R., Pechlivanidis, I. G., Breuer, L., Wortmann, M., Vetter, T., Flörke, M., Chamorro, A., Schäfer, D., Shah, H., Zeng, X.: Propagation of forcing and model uncertainty into hydrological drought characteristics in a multi-model century-long experiment in continental river basins. Submitted to Climatic Change, Dec 2015. Bosshard et al. 2013, doi:10.1029/2011WR011533. Prudhomme et al. 2014, doi:10.1073/pnas.1222473110. Teng et al. 2012, doi:10.1175/JHM-D-11-058.1.
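The attribution of ensemble spread to forcing versus hydrologic model structure can be illustrated with a simple variance decomposition of a balanced GCM-by-HM table of a drought characteristic. All values below are synthetic placeholders, not ISI-MIP2 results:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic drought-duration projections (months) for 5 GCM forcings x 6
# hydrologic models in one basin and period; values are placeholders.
gcm_effect = rng.normal(0.0, 3.0, size=(5, 1))
hm_effect = rng.normal(0.0, 1.0, size=(1, 6))
duration = 12.0 + gcm_effect + hm_effect + rng.normal(0.0, 0.5, size=(5, 6))

var_total = duration.var(ddof=0)
var_gcm = duration.mean(axis=1).var(ddof=0)     # spread of GCM means
var_hm = duration.mean(axis=0).var(ddof=0)      # spread of HM means
var_resid = var_total - var_gcm - var_hm        # interaction / residual term

for name, v in (("GCM forcing", var_gcm), ("hydrologic model", var_hm),
                ("interaction/residual", var_resid)):
    print(f"{name:>22}: {v / var_total:5.1%} of total variance")
```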
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other. A procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty in model performance due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied in order to enhance model calibration practice based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content in calibration data is important in addition to the size of calibration data.
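The cross-validation based input selection step can be sketched as a greedy forward search that stops when the cross-validated skill no longer improves. A minimal illustration on synthetic storm-event data (candidate variables, sample size, and the linear-regression choice are assumptions for the sketch, not the study's setup):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic storm-event data: candidate explanatory variables (e.g. rainfall
# depth, intensity, antecedent dry period, ...) and event mean concentrations.
X = rng.normal(size=(60, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(0.0, 0.5, size=60)

selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
while remaining:
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                 cv=5, scoring="neg_root_mean_squared_error").mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:          # stop when CV skill stops improving
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected input variables:", selected, "| CV RMSE:", round(-best_score, 3))
```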
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
NASA Astrophysics Data System (ADS)
Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.
2015-04-01
This paper shows that instability of hydrological system representation in response to different pieces of information and associated prediction uncertainty is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of the two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span across the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does and that parameter dimensionality is an incomplete indicator of stability of hydrological model selection and prediction problems.
Uncertainties in selected river water quality data
NASA Astrophysics Data System (ADS)
Rode, M.; Suhr, U.
2007-02-01
Monitoring of surface waters is primarily done to detect the status and trends in water quality and to identify whether observed trends arise from natural or anthropogenic causes. Empirical quality of river water quality data is rarely certain and knowledge of their uncertainties is essential to assess the reliability of water quality models and their predictions. The objective of this paper is to assess the uncertainties in selected river water quality data, i.e. suspended sediment, nitrogen fraction, phosphorus fraction, heavy metals and biological compounds. The methodology used to structure the uncertainty is based on the empirical quality of data and the sources of uncertainty in data (van Loon et al., 2005). A literature review was carried out including additional experimental data of the Elbe river. All data of compounds associated with suspended particulate matter have considerably higher sampling uncertainties than soluble concentrations. This is due to high variability within the cross section of a given river. This variability is positively correlated with total suspended particulate matter concentrations. Sampling location also has a considerable effect on the representativeness of a water sample. These sampling uncertainties are highly site specific. The estimation of uncertainty in sampling can only be achieved by taking at least a proportion of samples in duplicate. Compared to sampling uncertainties, measurement and analytical uncertainties are much lower. Instrument quality can be considered well suited to field and laboratory situations for all considered constituents. Analytical errors can contribute considerably to the overall uncertainty of river water quality data. Temporal autocorrelation of river water quality data is present, but literature on the general behaviour of water quality compounds is rare. For meso-scale river catchments (500-3000 km2), reasonable yearly dissolved load calculations can be achieved using biweekly sample frequencies. For suspended sediments, none of the methods investigated produced very reliable load estimates when weekly concentration data were used. Uncertainties associated with load estimates based on infrequent samples will decrease with increasing size of rivers.
Uncertainties in selected surface water quality data
NASA Astrophysics Data System (ADS)
Rode, M.; Suhr, U.
2006-09-01
Monitoring of surface waters is primarily done to detect the status and trends in water quality and to identify whether observed trends arise from natural or anthropogenic causes. Empirical quality of surface water quality data is rarely certain and knowledge of their uncertainties is essential to assess the reliability of water quality models and their predictions. The objective of this paper is to assess the uncertainties in selected surface water quality data, i.e. suspended sediment, nitrogen fraction, phosphorus fraction, heavy metals and biological compounds. The methodology used to structure the uncertainty is based on the empirical quality of data and the sources of uncertainty in data (van Loon et al., 2006). A literature review was carried out including additional experimental data of the Elbe river. All data of compounds associated with suspended particulate matter have considerably higher sampling uncertainties than soluble concentrations. This is due to high variability within the cross section of a given river. This variability is positively correlated with total suspended particulate matter concentrations. Sampling location also has a considerable effect on the representativeness of a water sample. These sampling uncertainties are highly site specific. The estimation of uncertainty in sampling can only be achieved by taking at least a proportion of samples in duplicate. Compared to sampling uncertainties, measurement and analytical uncertainties are much lower. Instrument quality can be considered well suited to field and laboratory situations for all considered constituents. Analytical errors can contribute considerably to the overall uncertainty of surface water quality data. Temporal autocorrelation of surface water quality data is present, but literature on the general behaviour of water quality compounds is rare. For meso-scale river catchments, reasonable yearly dissolved load calculations can be achieved using biweekly sample frequencies. For suspended sediments, none of the methods investigated produced very reliable load estimates when weekly concentration data were used. Uncertainties associated with load estimates based on infrequent samples will decrease with increasing size of rivers.
Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele
QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into two categories, simulation and optimization models, whose calibration has been widely addressed in the literature; relevant efforts in recent decades have led to two main categories of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms which benefit from the capabilities of both types, such as SUFI-2, have also been developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameter values regarding multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is actually developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select the point and interval estimation of parameters which are non-dominated regarding both of the uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, the answers to which motivate our research: Given that in SUFI-2 the final selection is based on the two measures or objectives, and knowing that there is no multi-objective optimization mechanism in SUFI-2, are the final estimations Pareto-optimal? Can systematic methods be applied to select the final estimations? Dealing with these questions, a new auto-calibration algorithm was proposed where the uncertainty measures were considered as two objectives to find non-dominated interval estimations of parameters by coupling Monte Carlo simulation and Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate parameters of the water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software in order to analyze the impacts of different water resources management strategies including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop pattern, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results, it was revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation from the Pareto frontier.
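For reference, the two SUFI-2 goodness-of-calibration measures named above can be computed directly from an ensemble band and the observed series. A short sketch with synthetic data (the series, ensemble size, and 95PPU percentiles are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic observed series and an ensemble of behavioural simulations
# (rows: parameter sets, columns: time steps).
observed = 10.0 + 3.0 * np.sin(np.linspace(0, 6 * np.pi, 120)) + rng.normal(0, 0.5, 120)
simulations = observed + rng.normal(0.0, 1.5, size=(200, 120))

lower = np.percentile(simulations, 2.5, axis=0)    # lower bound of the 95PPU band
upper = np.percentile(simulations, 97.5, axis=0)   # upper bound of the 95PPU band

p_factor = np.mean((observed >= lower) & (observed <= upper))  # obs. covered by 95PPU
r_factor = np.mean(upper - lower) / observed.std(ddof=1)       # band width / obs. std dev

print(f"p-factor = {p_factor:.2f}, r-factor = {r_factor:.2f}")
```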
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted and rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable. This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on their effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
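A compact sketch of the decision-oriented error probability described above: given a posterior on a log-conductivity parameter, estimate how much posterior mass falls on the side of the travel-time threshold that contradicts the 'true' outcome, and watch that error shrink as the posterior tightens. The toy advection model, threshold, and numbers are hypothetical, not the synthetic case studies of the abstract:

```python
import numpy as np

rng = np.random.default_rng(6)

t_critical = 100.0   # days; decision threshold for plume arrival (illustrative)

def travel_time(ln_K, distance=10.0, gradient=0.05, porosity=0.25):
    """Advective travel time (days) for samples of log-conductivity; a toy model."""
    velocity = np.exp(ln_K) * gradient / porosity      # m/day
    return distance / velocity

def prob_wrong_decision(post_mean, post_std, true_ln_K, n=20_000):
    """Probability that the accepted hypothesis (arrival before vs after the
    critical time) contradicts the outcome under the 'true' parameter value."""
    samples = rng.normal(post_mean, post_std, size=n)
    p_early = np.mean(travel_time(samples) < t_critical)
    truly_early = travel_time(np.array([true_ln_K]))[0] < t_critical
    return 1.0 - p_early if truly_early else p_early

# A tighter posterior (e.g. after an additional field campaign) lowers the
# probability of accepting the wrong hypothesis.
for post_std in (1.0, 0.5, 0.2):
    err = prob_wrong_decision(post_mean=0.0, post_std=post_std, true_ln_K=0.3)
    print(f"posterior std {post_std:.1f}: P(wrong hypothesis) = {err:.3f}")
```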
Quantitative estimation of source complexity in tsunami-source inversion
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.
2016-04-01
This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data-driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parametrizations. The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.
Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare
2017-11-01
The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment near Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by a different uncertainty combination of the sub-systems (i.e., SS, WWTP and RWB), have been considered, applying the Extended-FAST method for the sensitivity analysis in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for blocking some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting SMX modelling in the RWB when all model factors (scenario 1) or the model factors of the SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are varied (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced by the aerobic sorption coefficient (accounting for up to 95% of the total variance of SSMX,max). A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...
2012-12-20
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Prada, A F; Chu, M L; Guzman, J A; Moriasi, D N
2017-05-15
Evaluating the effectiveness of agricultural land management practices in minimizing environmental impacts using models is challenged by the presence of inherent uncertainties during the model development stage. One issue faced during the model development stage is the uncertainty involved in model parameterization. Using a single optimized set of parameters (one snapshot) to represent baseline conditions of the system limits the applicability and robustness of the model to properly represent future or alternative scenarios. The objective of this study was to develop a framework that facilitates model parameter selection while evaluating uncertainty to assess the impacts of land management practices at the watershed scale. The model framework was applied to the Lake Creek watershed located in southwestern Oklahoma, USA. A two-step probabilistic approach was implemented to parameterize the Agricultural Policy/Environmental eXtender (APEX) model using global uncertainty and sensitivity analysis to estimate the full spectrum of total monthly water yield (WYLD) and total monthly nitrogen loads (N) in the watershed under different land management practices. Twenty-seven models were found to represent the baseline scenario, in which uncertainty of up to 29% and 400% in WYLD and N, respectively, is plausible. Changing the land cover to pasture produced the largest decrease in N, up to 30% for full pasture coverage, while changing to full winter wheat cover can increase N by up to 11%. The methodology developed in this study was able to quantify the full spectrum of system responses, the uncertainty associated with them, and the most important parameters that drive their variability. Results from this study can be used to develop strategic decisions on the risks and tradeoffs associated with different management alternatives that aim to increase productivity while also minimizing their environmental impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.
Additional Samples: Where They Should Be Located
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilger, G. G.; Costa, J. F. C. L.; Koppe, J. C.
2001-09-15
Information for mine planning needs to be closely spaced compared to the grid used for exploration and resource assessment. The additional samples collected during quasi-mining are usually located on the same pattern as the original diamond drillhole net, but more closely spaced. This procedure is not the best, in a mathematical sense, for selecting a location. The impact of additional information in reducing the uncertainty about the parameter being modeled is not the same everywhere within the deposit. Some locations are more sensitive in reducing the local and global uncertainty than others. This study introduces a methodology to select additional sample locations based on stochastic simulation. The procedure takes into account data variability and their spatial location. Multiple equally probable models representing a geological attribute are generated via geostatistical simulation. These models share basically the same histogram and the same variogram obtained from the original data set. At each block belonging to the model a value is obtained from the n simulations, and their combination allows one to assess local variability. Variability is measured using a proposed uncertainty index. This index was used to map zones of high variability. A value extracted from a given simulation is added to the original data set from a zone identified as erratic in the previous maps. The process of adding samples and simulating is repeated and the benefit of the additional sample is evaluated. The benefit in terms of uncertainty reduction is measured locally and globally. The procedure proved to be robust and theoretically sound, mapping zones where the additional information is most beneficial. A case study in a coal mine using coal seam thickness illustrates the method.
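The block-wise uncertainty index idea can be sketched directly from a stack of simulated realizations; here the coefficient of variation across realizations is used as an illustrative proxy for the paper's index, and the thickness values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stack of geostatistical realizations of coal seam thickness (m),
# shaped (n_realizations, ny, nx); in practice these come from conditional simulation.
realizations = 2.0 + rng.gamma(shape=4.0, scale=0.3, size=(50, 20, 20))

# Simple uncertainty index per block: coefficient of variation across realizations
# (a proxy for the paper's index, used here only for illustration).
uncertainty_index = realizations.std(axis=0) / realizations.mean(axis=0)

# Candidate location for the next sample: the block with the highest index.
iy, ix = np.unravel_index(np.argmax(uncertainty_index), uncertainty_index.shape)
print(f"most uncertain block: row {iy}, column {ix}, index = {uncertainty_index[iy, ix]:.3f}")
```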
Cumulative uncertainty in measured streamflow and water quality data for small watersheds
Harmel, R.D.; Cooper, R.J.; Slade, R.M.; Haney, R.L.; Arnold, J.G.
2006-01-01
The scientific community has not established an adequate understanding of the uncertainty inherent in measured water quality data, which is introduced by four procedural categories: streamflow measurement, sample collection, sample preservation/storage, and laboratory analysis. Although previous research has produced valuable information on relative differences in procedures within these categories, little information is available that compares the procedural categories or presents the cumulative uncertainty in resulting water quality data. As a result, quality control emphasis is often misdirected, and data uncertainty is typically either ignored or accounted for with an arbitrary margin of safety. Faced with the need for scientifically defensible estimates of data uncertainty to support water resource management, the objectives of this research were to: (1) compile selected published information on uncertainty related to measured streamflow and water quality data for small watersheds, (2) use a root mean square error propagation method to compare the uncertainty introduced by each procedural category, and (3) use the error propagation method to determine the cumulative probable uncertainty in measured streamflow, sediment, and nutrient data. Best case, typical, and worst case "data quality" scenarios were examined. Averaged across all constituents, the calculated cumulative probable uncertainty (±%) contributed under typical scenarios ranged from 6% to 19% for streamflow measurement, from 4% to 48% for sample collection, from 2% to 16% for sample preservation/storage, and from 5% to 21% for laboratory analysis. Under typical conditions, errors in storm loads ranged from 8% to 104% for dissolved nutrients, from 8% to 110% for total N and P, and from 7% to 53% for TSS. Results indicated that uncertainty can increase substantially under poor measurement conditions and limited quality control effort. This research provides introductory scientific estimates of uncertainty in measured water quality data. The results and procedures presented should also assist modelers in quantifying the "quality" of calibration and evaluation data sets, determining model accuracy goals, and evaluating model performance.
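The root mean square error propagation step referred to above combines the component uncertainties in quadrature. A minimal sketch with made-up component values loosely within the quoted typical ranges (the actual values are constituent- and scenario-specific):

```python
import math

# Illustrative "typical scenario" component uncertainties (+/- %) for one constituent,
# loosely within the ranges quoted above; actual values are constituent-specific.
components = {
    "streamflow measurement": 10.0,
    "sample collection": 15.0,
    "sample preservation/storage": 5.0,
    "laboratory analysis": 10.0,
}

# Root mean square error propagation: independent component uncertainties
# combine in quadrature to give the cumulative probable uncertainty.
cumulative = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"cumulative probable uncertainty = +/-{cumulative:.1f}%")
```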
NASA Astrophysics Data System (ADS)
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2016-05-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2016-05-27
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
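The error-propagation and information-content analysis described in these two abstract versions follows standard optimal-estimation bookkeeping, which a few lines of linear algebra can illustrate: from a Jacobian and error covariances one obtains the retrieval covariance, the averaging kernel, the degrees of freedom for signal, and the Shannon information content. The Jacobian and covariance numbers below are invented placeholders, not MODIS values:

```python
import numpy as np

# Illustrative 3-state retrieval (tau, r_eff, h) observed with 4 IR channels.
K = np.array([[1.2, 0.3, 0.1],      # d(reflectance)/d(state), per channel
              [0.9, 0.6, 0.2],
              [0.4, 0.8, 0.5],
              [0.2, 0.4, 1.0]])
S_e = np.diag([0.05, 0.05, 0.08, 0.08]) ** 2   # measurement + model error covariance
S_a = np.diag([2.0, 1.0, 1.5]) ** 2            # a priori covariance

S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))  # retrieval covariance
A = S_hat @ K.T @ np.linalg.inv(S_e) @ K                                  # averaging kernel

dfs = np.trace(A)                                                   # degrees of freedom for signal
shannon = 0.5 * np.log(np.linalg.det(S_a) / np.linalg.det(S_hat))   # information content (nats)
print(f"DFS = {dfs:.2f}, Shannon information content = {shannon:.2f} nats")
```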
Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das
2018-02-01
This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple-input multiple-output (MIMO) plant model and the performance requirements, an input-output pairing analysis is performed to select the most suitable input for controlling each output, with the aim of attenuating loop coupling. The plant uncertainty limits are then selected and expressed in interval form around the parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Selection of climate policies under the uncertainties in the Fifth Assessment Report of the IPCC
NASA Astrophysics Data System (ADS)
Drouet, L.; Bosetti, V.; Tavoni, M.
2015-10-01
Strategies for dealing with climate change must incorporate and quantify all the relevant uncertainties, and be designed to manage the resulting risks. Here we employ the best available knowledge so far, summarized by the three working groups of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5), to quantify the uncertainty of mitigation costs, climate change dynamics, and economic damage for alternative carbon budgets. We rank climate policies according to different decision-making criteria concerning uncertainty, risk aversion and intertemporal preferences. Our findings show that preferences over uncertainties are as important as the choice of the widely discussed time discount factor. Climate policies consistent with limiting warming to 2 °C above preindustrial levels are compatible with a subset of decision-making criteria and some model parametrizations, but not with the commonly adopted expected utility framework.
NASA Astrophysics Data System (ADS)
Mishra, Aashwin; Iaccarino, Gianluca
2017-11-01
In spite of their deficiencies, RANS models represent the workhorse for industrial investigations into turbulent flows. In this context, it is essential to provide diagnostic measures to assess the quality of RANS predictions. To this end, the primary step is to identify feature importances amongst massive sets of potentially descriptive and discriminative flow features. This aids the physical interpretability of the resultant discrepancy model and its extensibility to similar problems. Recent investigations have utilized approaches such as Random Forests, Support Vector Machines and the Least Absolute Shrinkage and Selection Operator for feature selection. With examples, we exhibit how such methods may not be suitable for turbulent flow datasets. The underlying rationale, such as the correlation bias and the required conditions for the success of penalized algorithms, are discussed with illustrative examples. Finally, we provide alternate approaches using convex combinations of regularized regression approaches and randomized sub-sampling in combination with feature selection algorithms, to infer model structure from data. This research was supported by the Defense Advanced Research Projects Agency under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
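One of the alternate approaches mentioned above, randomized sub-sampling combined with a penalized selector, can be sketched as follows. This is a schematic "stability selection"-style example on synthetic data, not the authors' implementation; the feature matrix, target, penalty strength and selection threshold are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))                              # stand-in flow features
y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=400)   # synthetic discrepancy target

# Randomized sub-sampling + L1 selection: count how often each feature survives
# the penalty across random half-samples of the data.
counts = np.zeros(X.shape[1])
n_rounds = 200
for _ in range(n_rounds):
    idx = rng.choice(len(y), size=len(y) // 2, replace=False)
    model = Lasso(alpha=0.05).fit(X[idx], y[idx])
    counts += (np.abs(model.coef_) > 1e-6)

selection_frequency = counts / n_rounds
print("stable features:", np.where(selection_frequency > 0.8)[0])
```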
Maity, Arnab; Hocht, Leonhard; Heise, Christian; Holzapfel, Florian
2018-01-01
A new efficient adaptive optimal control approach is presented in this paper based on the indirect model reference adaptive control (MRAC) architecture for improvement of adaptation and tracking performance of the uncertain system. The system here accounts for both matched and unmatched unknown uncertainties, which can act as plant failures as well as input-effectiveness failures or damage. For adaptation of the unknown parameters of these uncertainties, the frequency selective learning approach is used. Its idea is to compute a filtered expression of the system uncertainty using multiple filters based on online instantaneous information, which is used for augmentation of the update law. It is capable of adjusting to a sudden change in system dynamics without depending on high adaptation gains and can satisfy exponential parameter error convergence under certain conditions in the presence of structured matched and unmatched uncertainties as well. Additionally, the controller of the MRAC system is designed using a new optimal control method. This method is a new linear quadratic regulator-based optimal control formulation for both output regulation and command tracking problems. It provides a closed-form control solution. The proposed overall approach is applied to the control of the lateral dynamics of an unmanned aircraft to show its effectiveness.
Poonam Khanijo Ahluwalia; Nema, Arvind K
2011-07-01
Selection of optimum locations for new facilities and decisions regarding the capacities of those facilities are major concerns for municipal authorities/managers. The decision as to whether a single facility is preferred over multiple facilities of smaller capacities would vary with the priorities assigned to cost and to associated risks, such as environmental or health risk or the risk perceived by society. Currently, waste streams such as computer waste are managed using rudimentary practices in a flourishing, unorganized sector, mainly as backyard workshops in many cities of developing nations such as India. Uncertainty in the quantification of computer waste generation is another major concern due to the informal setup of the present computer waste management scenario. Hence, there is a need to simultaneously address uncertainty in waste generation quantities while analyzing the tradeoffs between cost and associated risks. The present study aimed to address the above-mentioned issues in a multi-time-step, multi-objective decision-support model, which can address multiple objectives of cost, environmental risk, socially perceived risk and health risk, while selecting the optimum configuration of existing and proposed facilities (location and capacities).
Measurement of the top quark mass using charged particles in pp collisions at √s = 8 TeV
Khachatryan, Vardan
2016-05-18
A novel technique for measuring the mass of the top quark that uses only the kinematic properties of its charged decay products is presented. Top quark pair events with final states with one or two charged leptons and hadronic jets are selected from the data set of 8 TeV proton-proton collisions, corresponding to an integrated luminosity of 19.7 fb-1. By reconstructing secondary vertices inside the selected jets and computing the invariant mass of the system formed by the secondary vertex and an isolated lepton, an observable is constructed that is sensitive to the top quark mass and is expected to be robust against the energy scale of hadronic jets. The main theoretical systematic uncertainties, concerning the modeling of the fragmentation and hadronization of b quarks and the reconstruction of secondary vertices from the decays of b hadrons, are studied. A top quark mass of 173.68 ± 0.20 (stat) +1.58/-0.97 (syst) GeV is measured. Furthermore, the overall systematic uncertainty is dominated by the uncertainty in the b quark fragmentation and the modeling of kinematic properties of the top quark.
NASA Astrophysics Data System (ADS)
Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia
2017-12-01
Due to the uncertainty (i.e., fuzziness, stochasticity and imprecision) that exists simultaneously in the groundwater remediation process, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities, which can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator and the weights of attributes are calculated by employing the weight cloud module. A case study on remedial alternative selection for a contaminated site suffering from a 1,1,1-trichloroethylene leakage problem in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, an attribute system consisting of ten attributes, including daily total pumping rate, total cost and cloud model based health risk, was used for evaluating each alternative through the developed method under uncertainty. Results indicated that A14 was evaluated to be the most preferred alternative for the 5-year remediation, A5 for the 10-year, A4 for the 15-year and A6 for the 20-year remediation.
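The backward cloud generator referred to above is commonly given by moment-based formulas for the expectation (Ex), entropy (En) and hyper-entropy (He) of a normal cloud. The sketch below implements that common variant on invented concentration data; the paper's exact aggregation and weighting steps are not reproduced here.

```python
import numpy as np

def backward_cloud(samples):
    """One common backward cloud generator: estimate (Ex, En, He) from data.

    This follows the widely cited moment-based formulas; the paper's exact
    variant is not specified here, so treat this as a generic sketch.
    """
    x = np.asarray(samples, dtype=float)
    Ex = x.mean()                                        # expectation
    En = np.sqrt(np.pi / 2.0) * np.abs(x - Ex).mean()    # entropy (fuzziness)
    He = np.sqrt(max(x.var(ddof=1) - En ** 2, 0.0))      # hyper-entropy (stochasticity)
    return Ex, En, He

# Hypothetical measured contaminant concentrations at one monitoring well (mg/L).
print(backward_cloud([0.42, 0.55, 0.48, 0.61, 0.39, 0.52]))
```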
How Do Cultural Producers Make Creative Decisions? Lessons from the Catwalk
ERIC Educational Resources Information Center
Godart, Frederic C.; Mears, Ashley
2009-01-01
Faced with high uncertainty, how do producers in the cultural economy make creative decisions? We present a case study of the fashion modeling industry. Using participant observation, interviews and network analysis of the Spring/Summer 2007 Fashion Week collections, we explain how producers select models for fashion shows. While fashion producers…
Gallagher, Daniel; Ebel, Eric D; Gallagher, Owen; Labarre, David; Williams, Michael S; Golden, Neal J; Pouillot, Régis; Dearfield, Kerry L; Kause, Janell
2013-04-01
This report illustrates how the uncertainty about food safety metrics may influence the selection of a performance objective (PO). To accomplish this goal, we developed a model concerning Listeria monocytogenes in ready-to-eat (RTE) deli meats. This application used a second order Monte Carlo model that simulates L. monocytogenes concentrations through a series of steps: the food-processing establishment, transport, retail, the consumer's home and consumption. The model accounted for growth inhibitor use, retail cross contamination, and applied an FAO/WHO dose response model for evaluating the probability of illness. An appropriate level of protection (ALOP) risk metric was selected as the average risk of illness per serving across all consumed servings-per-annum and the model was used to solve for the corresponding performance objective (PO) risk metric as the maximum allowable L. monocytogenes concentration (cfu/g) at the processing establishment where regulatory monitoring would occur. Given uncertainty about model inputs, an uncertainty distribution of the PO was estimated. Additionally, we considered how RTE deli meats contaminated at levels above the PO would be handled by the industry using three alternative approaches. Points on the PO distribution represent the probability that - if the industry complies with a particular PO - the resulting risk-per-serving is less than or equal to the target ALOP. For example, assuming (1) a target ALOP of -6.41 log10 risk of illness per serving, (2) industry concentrations above the PO that are re-distributed throughout the remaining concentration distribution and (3) no dose response uncertainty, establishment PO's of -4.98 and -4.39 log10 cfu/g would be required for 90% and 75% confidence that the target ALOP is met, respectively. The PO concentrations from this example scenario are more stringent than the current typical monitoring level of an absence in 25 g (i.e., -1.40 log10 cfu/g) or a stricter criteria of absence in 125 g (i.e., -2.1 log10 cfu/g). This example, and others, demonstrates that a PO for L. monocytogenes would be far below any current monitoring capabilities. Furthermore, this work highlights the demands placed on risk managers and risk assessors when applying uncertain risk models to the current risk metric framework. Copyright © 2013 Elsevier B.V. All rights reserved.
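A second order Monte Carlo model of the kind described above separates uncertainty (outer loop) from variability (inner loop). The sketch below shows the nesting on a toy exponential dose-response; every distribution and constant is an invented placeholder, not the FAO/WHO model or the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_uncertainty, n_variability = 200, 5000

risk_per_uncertainty_draw = []
for _ in range(n_uncertainty):
    # Outer loop: draw uncertain model inputs (e.g., dose-response slope, growth);
    # the distributions below are invented placeholders.
    r_slope = rng.lognormal(mean=np.log(1e-12), sigma=0.5)    # P(illness) per cfu
    growth_log10 = rng.normal(1.0, 0.3)                        # log10 growth, retail to consumption

    # Inner loop: variability across servings.
    conc_log10 = rng.normal(-3.0, 1.0, n_variability) + growth_log10   # log10 cfu/g at consumption
    dose = (10 ** conc_log10) * 50.0                                    # 50 g serving -> cfu per serving
    p_ill = 1.0 - np.exp(-r_slope * dose)                               # exponential dose-response
    risk_per_uncertainty_draw.append(p_ill.mean())                      # mean risk per serving

risk = np.array(risk_per_uncertainty_draw)
print("median log10 risk per serving:", np.log10(np.median(risk)))
print("90% uncertainty interval (log10):", np.log10(np.percentile(risk, [5, 95])))
```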
Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.
Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana
2012-05-15
Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For the ill-posed water quality model, the differences between the results were much wider, and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability distribution of parameters), it was found that SCEM-UA and AMALGAM produce results quicker than GLUE in terms of the required number of simulations. However, GLUE requires the lowest modelling skills and is easy to implement. All non-Bayesian methods have problems with the way they accept behavioural parameter sets: GLUE, SCEM-UA and AMALGAM have subjective acceptance thresholds, while MICA usually has problems with its assumption of normality of residuals. It is concluded that modellers should select the method which is most suitable for the system they are modelling (e.g. complexity of the model's structure including the number of parameters), their skill/knowledge level, the available information, and the purpose of their study. Copyright © 2012 Elsevier Ltd. All rights reserved.
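For orientation, a minimal GLUE implementation looks roughly like the sketch below: sample parameters from a prior, keep "behavioural" sets whose Nash-Sutcliffe efficiency exceeds a subjective threshold, and form likelihood-weighted prediction bounds. The `simulate` and `prior_sampler` callables are placeholders for the user's own rainfall/runoff model, and the threshold choice illustrates exactly the subjectivity criticized above.

```python
import numpy as np

def glue(simulate, prior_sampler, observed, n_samples=10000, threshold=0.5):
    """Minimal GLUE sketch. `simulate(theta)` returns a flow series and
    `prior_sampler()` draws one parameter set; both are user-supplied placeholders."""
    behavioural, weights, sims = [], [], []
    for _ in range(n_samples):
        theta = prior_sampler()
        q = simulate(theta)
        nse = 1.0 - np.sum((q - observed) ** 2) / np.sum((observed - observed.mean()) ** 2)
        if nse > threshold:                      # subjective acceptance threshold
            behavioural.append(theta)
            weights.append(nse)
            sims.append(q)
    w = np.array(weights) / np.sum(weights)      # informal likelihood weights
    sims = np.array(sims)
    # Likelihood-weighted 5-95% prediction bounds at each time step.
    lower, upper = [], []
    for t in range(sims.shape[1]):
        order = np.argsort(sims[:, t])
        s, cw = sims[order, t], np.cumsum(w[order])
        lower.append(np.interp(0.05, cw, s))
        upper.append(np.interp(0.95, cw, s))
    return behavioural, np.array(lower), np.array(upper)
```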
Granato, Gregory E.; Jones, Susan C.
2015-01-01
Results of this study indicate the potential benefits of the multi-decade simulations that SELDM provides because these simulations quantify risks and uncertainties that affect decisions made with available data and statistics. Results of the SELDM simulations indicate that the WQABI criteria concentrations may be too stringent for evaluating the stormwater quality in receiving streams, highway runoff, and BMP discharges; especially with the substantial uncertainties inherent in selecting representative data.
NASA Astrophysics Data System (ADS)
Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.
2013-12-01
Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, in general circulation models (GCMs) representations of precipitation changes are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present the results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to understand the robustness of a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] as a better explanation of precipitation changes in transient climate projections under GCM parametric uncertainty than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].
Statistical evaluation of the influence of the uncertainty budget on B-spline curve approximation
NASA Astrophysics Data System (ADS)
Zhao, Xin; Alkhatib, Hamza; Kargoll, Boris; Neumann, Ingo
2017-12-01
In the field of engineering geodesy, terrestrial laser scanning (TLS) has become a popular method for detecting deformations. This paper analyzes the influence of the uncertainty budget on free-form curves modeled by B-splines. Usually, free-form estimation is based on scanning points assumed to have equal accuracies, which is not realistic. Previous findings demonstrate that the residuals still contain random and systematic uncertainties caused by instrumental, object-related and atmospheric influences. In order to guarantee the quality of derived estimates, it is essential to be aware of all uncertainties and their impact on the estimation. In this paper, a more detailed uncertainty budget is considered, in the context of the "Guide to the Expression of Uncertainty in Measurement" (GUM), which leads to a refined, heteroskedastic variance covariance matrix (VCM) of TLS measurements. Furthermore, the control points of B-spline curves approximating a measured bridge are estimated. Comparisons are made between the estimated B-spline curves using on the one hand a homoskedastic VCM and on the other hand the refined VCM. To assess the statistical significance of the differences displayed by the estimates for the two stochastic models, a nested model misspecification test and a non-nested model selection test are described and applied. The test decisions indicate that the homoskedastic VCM should be replaced by a heteroskedastic VCM in the direction of the suggested VCM. However, the tests also indicate that the considered VCM is still inadequate in light of the given data set and should therefore be improved.
Howell, J.E.; Moore, C.T.; Conroy, M.J.; Hamrick, R.G.; Cooper, R.J.; Thackston, R.E.; Carroll, J.P.
2009-01-01
Large-scale habitat enhancement programs for birds are becoming more widespread; however, most lack monitoring to resolve uncertainties and enhance program impact over time. Georgia's Bobwhite Quail Initiative (BQI) is a competitive, proposal-based system that provides incentives to landowners to establish habitat for northern bobwhites (Colinus virginianus). Using data from monitoring conducted in the program's first years (1999-2001), we developed alternative hierarchical models to predict bobwhite abundance in response to program habitat modifications on local and regional scales. Effects of habitat and habitat management on bobwhite population response varied among geographical scales, but high measurement variability rendered the specific nature of these scaled effects equivocal. Under some models, BQI had positive impact at both local farm scales (1, 9 km2), particularly when practice acres were clustered, whereas other credible models indicated that bird response did not depend on spatial arrangement of practices. Thus, uncertainty about landscape-level effects of management presents a challenge to program managers who must decide which proposals to accept. We demonstrate that optimal selection decisions can be made despite this uncertainty and that uncertainty can be reduced over time, with consequent improvement in management efficacy. However, such an adaptive approach to BQI program implementation would require the reestablishment of monitoring of bobwhite abundance, an effort for which funding was discontinued in 2002. For landscape-level conservation programs generally, our approach demonstrates the value in assessing multiple scales of impact of habitat modification programs, and it reveals the utility of addressing management uncertainty through multiple decision models and system monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Chanyoung; Kim, Nam H.
Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimation of the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lacking knowledge of actual physics, so that conservativeness in a safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
NASA Astrophysics Data System (ADS)
Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.
2016-12-01
This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign together with satellite and reanalysis data are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared with the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for improving the model physics parameterizations.
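The inter- versus intra-group (scheme) variance decomposition mentioned above can be illustrated with a simple one-way share-of-variance calculation. The sketch below uses an invented ensemble table; the scheme labels, output column and effect sizes are assumptions for illustration, not results from the 120-member ensemble.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Hypothetical ensemble: one row per WRF member; scheme labels and the output
# column are invented for illustration only.
pbl = rng.choice(["YSU", "MYJ", "MYNN"], size=120)
mp = rng.choice(["Morrison", "Thompson"], size=120)
effect = {"YSU": 5.0, "MYJ": 0.0, "MYNN": -3.0}          # assumed PBL-scheme effect
flux = 80.0 + np.vectorize(effect.get)(pbl) + rng.normal(0, 2.0, 120)
df = pd.DataFrame({"pbl": pbl, "microphys": mp, "latent_heat_flux": flux})

total_var = df["latent_heat_flux"].var(ddof=0)
for factor in ["pbl", "microphys"]:
    group_means = df.groupby(factor)["latent_heat_flux"].transform("mean")
    share = group_means.var(ddof=0) / total_var           # between-group (inter-scheme) share
    print(f"{factor}: {share:.1%} of ensemble variance")
```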
NASA Astrophysics Data System (ADS)
Liang, Zhongmin; Li, Yujie; Hu, Yiming; Li, Binquan; Wang, Jun
2017-06-01
Accurate and reliable long-term forecasting plays an important role in water resources management and utilization. In this paper, a hybrid model called SVR-HUP is presented to predict long-term runoff and quantify the prediction uncertainty. The model is created in three steps. First, appropriate predictors are selected according to the correlations between meteorological factors and runoff. Second, a support vector regression (SVR) model is structured and optimized based on the LibSVM toolbox and a genetic algorithm. Finally, using forecasted and observed runoff, a hydrologic uncertainty processor (HUP) based on a Bayesian framework is used to estimate the posterior probability distribution of the simulated values, and the associated prediction uncertainty is quantitatively analyzed. Six precision evaluation indexes, including the correlation coefficient (CC), relative root mean square error (RRMSE), relative error (RE), mean absolute percentage error (MAPE), Nash-Sutcliffe efficiency (NSE), and qualification rate (QR), are used to measure the prediction accuracy. As a case study, the proposed approach is applied in the Han River basin, South Central China. Three types of SVR models are established to forecast the monthly, flood season and annual runoff volumes. The results indicate that SVR yields satisfactory accuracy and reliability at all three scales. In addition, the results suggest that the HUP can not only quantify the uncertainty of prediction based on a confidence interval but also provide a more accurate single value prediction than the initial SVR forecasting result. Thus, the SVR-HUP model provides an alternative method for long-term runoff forecasting.
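A hydrologic uncertainty processor reduces, in its simplest Gaussian form, to a conjugate Bayesian update of the prior (climatological) flow distribution given the model forecast. The sketch below is that simplified form, assuming a normalizing transform has already been applied; the coefficients and variances are invented, and the paper's HUP is more elaborate than this.

```python
import numpy as np

def hup_posterior(forecast, prior_mean, prior_var, a, b, resid_var):
    """Simplified Gaussian hydrologic uncertainty processor (sketch only).

    Assumes (after a normalizing transform) prior H ~ N(prior_mean, prior_var) and
    forecast | H ~ N(a*H + b, resid_var); a, b and resid_var would be fitted from
    past forecast-observation pairs.
    """
    precision = 1.0 / prior_var + a ** 2 / resid_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + a * (forecast - b) / resid_var)
    return post_mean, post_var

# Hypothetical numbers: an SVR forecast of 120 against a climatological prior of 100.
mean, var = hup_posterior(forecast=120.0, prior_mean=100.0, prior_var=900.0,
                          a=1.0, b=5.0, resid_var=400.0)
print(f"posterior runoff: {mean:.1f} +/- {1.96 * np.sqrt(var):.1f} (95% interval)")
```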
Zhu, Ming-Liang; Zhang, Qing-Hang; Lupton, Colin; Tong, Jie
2016-04-01
The measurement uncertainty of strains has been assessed in a bone analogue (sawbone), bovine trabecular bone and bone-cement interface specimens under zero load using the Digital Volume Correlation (DVC) method. The effects of sub-volume size, sample constraint and preload on the measured strain uncertainty have been examined. There is generally a trade-off between the measurement uncertainty and the spatial resolution. Suitable sub-volume sizes have been selected based on a compromise between the measurement uncertainty and the spatial resolution of the cases considered. A ratio of sub-volume size to a microstructure characteristic (Tb.Sp) was introduced to reflect a suitable spatial resolution, and the associated measurement uncertainty was assessed. Specifically, ratios between 1.6 and 4 appear to give rise to standard deviations in the measured strains between 166 and 620 με in all the cases considered, which would seem to suffice for strain analysis in pre- as well as post-yield loading regimes. A microscale finite element (μFE) model was built from the CT images of the sawbone, and the results from the μFE model and a continuum FE model were compared with those from the DVC. The strain results were found to differ significantly between the two methods at tissue level, consistent in trend with the results found in human bones, indicating mainly a limitation of the current DVC method in mapping strains at this level. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bayesian multimodel inference for dose-response studies
Link, W.A.; Albers, P.H.
2007-01-01
Statistical inference in dose-response studies is model-based: The analyst posits a mathematical model of the relation between exposure and response, estimates parameters of the model, and reports conclusions conditional on the model. Such analyses rarely include any accounting for the uncertainties associated with model selection. The Bayesian inferential system provides a convenient framework for model selection and multimodel inference. In this paper we briefly describe the Bayesian paradigm and Bayesian multimodel inference. We then present a family of models for multinomial dose-response data and apply Bayesian multimodel inferential methods to the analysis of data on the reproductive success of American kestrels (Falco sparverius) exposed to various sublethal dietary concentrations of methylmercury.
NASA Astrophysics Data System (ADS)
Bobovnik, G.; Kutin, J.; Bajsić, I.
2016-08-01
This paper deals with an uncertainty analysis of gas flow measurements using a compact, high-speed, clearance-sealed realization of a piston prover. A detailed methodology for the uncertainty analysis, covering the components due to the gas density, dimensional and time measurements, the leakage flow, the density correction factor and the repeatability, is presented. The paper also deals with the selection of the isothermal and adiabatic measurement models, the treatment of the leakage flow and discusses the need for averaging multiple consecutive readings of the piston prover. The analysis is prepared for the flow range (50 000:1) covered by the three interchangeable flow cells. The results show that using the adiabatic measurement model and averaging the multiple readings, the estimated expanded measurement uncertainty of the gas mass flow rate is less than 0.15% in the flow range above 0.012 g min-1, whereas it increases for lower mass flow rates due to the leakage flow related effects. At the upper end of the measuring range, using the adiabatic instead of the isothermal measurement model, as well as averaging multiple readings, proves important.
Absolute Ages and Distances of 22 GCs Using Monte Carlo Main-sequence Fitting
NASA Astrophysics Data System (ADS)
O'Malley, Erin M.; Gilligan, Christina; Chaboyer, Brian
2017-04-01
The recent Gaia Data Release 1 of stellar parallaxes provides ample opportunity to find metal-poor main-sequence stars with precise parallaxes. We select 21 such stars with parallax uncertainties better than σ_π/π ≤ 0.10 and accurate abundance determinations suitable for testing metal-poor stellar evolution models and determining the distance to Galactic globular clusters (GCs). A Monte Carlo analysis was used, taking into account uncertainties in the model construction parameters, to generate stellar models and isochrones to fit to the calibration stars. The isochrones that fit the calibration stars best were then used to determine the distances and ages of 22 GCs with metallicities ranging from -2.4 dex to -0.7 dex. We find distances with an average uncertainty of 0.15 mag and absolute ages ranging from 10.8 to 13.6 Gyr with an average uncertainty of 1.6 Gyr. Using literature proper motion data, we calculate orbits for the clusters, finding six that reside within the Galactic disk/bulge, while the rest are considered halo clusters. We find no strong evidence for a relationship between age and Galactocentric distance, but we do find a decreasing age-[Fe/H] relation.
Bridging groundwater models and decision support with a Bayesian network
Fienen, Michael N.; Masterson, John P.; Plant, Nathaniel G.; Gutierrez, Benjamin T.; Thieler, E. Robert
2013-01-01
Resource managers need to make decisions to plan for future environmental conditions, particularly sea level rise, in the face of substantial uncertainty. Many interacting processes factor in to the decisions they face. Advances in process models and the quantification of uncertainty have made models a valuable tool for this purpose. Long-simulation runtimes and, often, numerical instability make linking process models impractical in many cases. A method for emulating the important connections between model input and forecasts, while propagating uncertainty, has the potential to provide a bridge between complicated numerical process models and the efficiency and stability needed for decision making. We explore this using a Bayesian network (BN) to emulate a groundwater flow model. We expand on previous approaches to validating a BN by calculating forecasting skill using cross validation of a groundwater model of Assateague Island in Virginia and Maryland, USA. This BN emulation was shown to capture the important groundwater-flow characteristics and uncertainty of the groundwater system because of its connection to island morphology and sea level. Forecast power metrics associated with the validation of multiple alternative BN designs guided the selection of an optimal level of BN complexity. Assateague island is an ideal test case for exploring a forecasting tool based on current conditions because the unique hydrogeomorphological variability of the island includes a range of settings indicative of past, current, and future conditions. The resulting BN is a valuable tool for exploring the response of groundwater conditions to sea level rise in decision support.
Fitts Cochrane, Jean; Lonsdorf, Eric; Allison, Taber D; Sanders-Reed, Carol A
2015-09-01
Challenges arise when renewable energy development triggers "no net loss" policies for protected species, such as where wind energy facilities affect Golden Eagles in the western United States. When established mitigation approaches are insufficient to fully avoid or offset losses, conservation goals may still be achievable through experimental implementation of unproven mitigation methods provided they are analyzed within a framework that deals transparently and rigorously with uncertainty. We developed an approach to quantify and analyze compensatory mitigation that (1) relies on expert opinion elicited in a thoughtful and structured process to design the analysis (models) and supplement available data, (2) builds computational models as hypotheses about cause-effect relationships, (3) represents scientific uncertainty in stochastic model simulations, (4) provides probabilistic predictions of "relative" mortality with and without mitigation, (5) presents results in clear formats useful to applying risk management preferences (regulatory standards) and selecting strategies and levels of mitigation for immediate action, and (6) defines predictive parameters in units that could be monitored effectively, to support experimental adaptive management and reduction in uncertainty. We illustrate the approach with a case study characterized by high uncertainty about underlying biological processes and high conservation interest: estimating the quantitative effects of voluntary strategies to abate lead poisoning in Golden Eagles in Wyoming due to ingestion of spent game hunting ammunition.
The doctor-patient relationship as a toolkit for uncertain clinical decisions.
Diamond-Brown, Lauren
2016-06-01
Medical uncertainty is a well-recognized problem in healthcare, yet how doctors make decisions in the face of uncertainty remains to be understood. This article draws on interdisciplinary literature on uncertainty and physician decision-making to examine a specific physician response to uncertainty: using the doctor-patient relationship as a toolkit. Additionally, I ask what happens to this process when the doctor-patient relationship becomes fragmented. I answer these questions by examining obstetrician-gynecologists' narratives regarding how they make decisions when faced with uncertainty in childbirth. Between 2013 and 2014, I performed 21 semi-structured interviews with obstetricians in the United States. Obstetricians were selected to maximize variation in relevant physician, hospital, and practice characteristics. I began with grounded theory and moved to analytical coding of themes in relation to relevant literature. My analysis renders it evident that some physicians use the doctor-patient relationship as a toolkit for dealing with uncertainty. I analyze how this process varies for physicians in different models of care by comparing doctors' experiences in models with continuous versus fragmented doctor-patient relationships. My key findings are that obstetricians in both models appealed to the ideal of patient-centered decision-making to cope with uncertain decisions, but in practice physicians in fragmented care faced a number of challenges to using the doctor-patient relationship as a toolkit for decision-making. These challenges led to additional uncertainties and in some cases to poor outcomes for doctors and/or patients; they also raised concerns about the reproduction of inequality. Thus organization of care delivery mitigates the efficacy of doctors' use of the doctor-patient relationship toolkit for uncertain decisions. These findings have implications for theorizing about decision-making under conditions of medical uncertainty, for understanding how the doctor-patient relationship and model of care affect physician decision-making, and for forming policy on the optimal structure of medical work. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.
2011-12-01
Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
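A toy stand-in for the discriminatory data collection idea is to score each candidate observation by how strongly the likelihood-weighted model predictions disagree there, and pick the maximiser. The sketch below does exactly that with hypothetical transport models; it is not the actual MMA-DDC criterion.

```python
import numpy as np

def pick_discriminating_observation(candidate_times, models, weights):
    """Pick the candidate observation where likelihood-weighted model predictions
    disagree most (weighted prediction variance). `models` is a list of callables
    returning a predicted concentration at time t; a simplified stand-in for MMA-DDC."""
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    best_t, best_spread = None, -np.inf
    for t in candidate_times:
        preds = np.array([m(t) for m in models])
        mean = np.sum(weights * preds)
        spread = np.sum(weights * (preds - mean) ** 2)
        if spread > best_spread:
            best_t, best_spread = t, spread
    return best_t

# Hypothetical competing transport models (different dispersivities).
models = [lambda t, d=d: np.exp(-(t - 10.0) ** 2 / (2 * d ** 2)) for d in (1.0, 2.0, 4.0)]
print(pick_discriminating_observation(np.linspace(0, 30, 61), models, [0.4, 0.35, 0.25]))
```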
2011-05-01
...are being achieved. Thus, public review (and political tradeoffs) can be incorporated in choosing short-term management strategies, but ultimate...
CANCER RISK ASSESSMENTS (RA.D.1D)
Risk assessments are based on questions that the assessor asks about scientific information that is relevant to human and/or environmental risk. The risk characterization also provides an evaluation of the assumptions, uncertainties, and selection of studies and models used in th...
Uncertainty, imprecision, and the precautionary principle in climate change assessment.
Borsuk, M E; Tomassini, L
2005-01-01
Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
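Computing upper and lower expected costs over a finite class of probability measures, and then applying the minimum upper expected cost criterion, can be illustrated in a few lines. All costs and probability measures below are invented placeholders, not the climate and economics models of the study.

```python
import numpy as np

# Rows: candidate emissions levels; columns: discrete climate-sensitivity states.
# Costs (damages plus mitigation) are invented placeholders.
actions = ["high emissions", "medium emissions", "low emissions"]
costs = np.array([[2.0, 4.0, 9.0],
                  [3.0, 4.5, 6.0],
                  [5.0, 5.2, 5.5]])

# A finite class of probability measures over the states, e.g. different expert priors.
measures = np.array([[0.5, 0.3, 0.2],
                     [0.3, 0.4, 0.3],
                     [0.2, 0.3, 0.5]])

expected = costs @ measures.T                  # expected cost of each action under each measure
upper, lower = expected.max(axis=1), expected.min(axis=1)
print("upper expected costs:", upper)
print("lower expected costs:", lower)
print("precautionary choice (min upper expected cost):", actions[int(np.argmin(upper))])
```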
Robust Decision-making Applied to Model Selection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M.
2012-08-06
The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
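An info-gap robustness comparison of the kind described can be sketched as follows: for each candidate model, find the largest uncertainty horizon at which its worst-case error still meets a performance tolerance, and prefer the model with the larger horizon. The worst-case error curves below are invented, chosen only to illustrate the observation that a nominally more accurate model need not be the more robust one.

```python
import numpy as np

def robustness(worst_case_error, tolerance, horizons):
    """Largest uncertainty horizon h for which the worst-case error stays within
    the performance tolerance. `worst_case_error(h)` is model-specific and here
    just a placeholder supplied by the caller."""
    feasible = [h for h in horizons if worst_case_error(h) <= tolerance]
    return max(feasible) if feasible else 0.0

horizons = np.linspace(0.0, 1.0, 101)
# Two hypothetical models: A is nominally more accurate (smaller error at h = 0) but
# its worst-case error grows faster with the uncertainty horizon than B's.
model_a = lambda h: 0.02 + 0.50 * h
model_b = lambda h: 0.05 + 0.15 * h

for name, wc in [("A (nominally accurate)", model_a), ("B (nominally coarser)", model_b)]:
    print(name, "robustness:", robustness(wc, tolerance=0.10, horizons=horizons))
```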
Climate Change Impacts at Department of Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotamarthi, Rao; Wang, Jiali; Zoebel, Zach
This project is aimed at providing the U.S. Department of Defense (DoD) with a comprehensive analysis of the uncertainty associated with generating climate projections at the regional scale that can be used by stakeholders and decision makers to quantify and plan for the impacts of future climate change at specific locations. The merits and limitations of commonly used downscaling models, ranging from simple to complex, are compared, and their appropriateness for application at installation scales is evaluated. Downscaled climate projections are generated at selected DoD installations using dynamic and statistical methods with an emphasis on generating probability distributions of climate variables and their associated uncertainties. The site selection and the selection of variables and parameters for downscaling were based on a comprehensive understanding of the current and projected roles that weather and climate play in operating, maintaining, and planning DoD facilities and installations.
Integrated watershed-scale response to climate change for selected basins across the United States
Markstrom, Steven L.; Hay, Lauren E.; Ward-Garrison, D. Christian; Risley, John C.; Battaglin, William A.; Bjerklie, David M.; Chase, Katherine J.; Christiansen, Daniel E.; Dudley, Robert W.; Hunt, Randall J.; Koczot, Kathryn M.; Mastin, Mark C.; Regan, R. Steven; Viger, Roland J.; Vining, Kevin C.; Walker, John F.
2012-01-01
A study by the U.S. Geological Survey (USGS) evaluated the hydrologic response to different projected carbon emission scenarios of the 21st century using a hydrologic simulation model. This study involved five major steps: (1) set up, calibrate, and evaluate the Precipitation Runoff Modeling System (PRMS) model in 14 basins across the United States, carried out by local USGS personnel; (2) acquire selected simulated carbon emission scenarios from the World Climate Research Programme's Coupled Model Intercomparison Project; (3) statistically downscale these scenarios to create PRMS input files that reflect the future climatic conditions of these scenarios; (4) generate PRMS projections for the carbon emission scenarios for the 14 basins; and (5) analyze the modeled hydrologic response. This report presents an overview of this study, details of the methodology, results from the 14 basin simulations, and interpretation of these results. A key finding is that the hydrological response of the different geographical regions of the United States to potential climate change may be different, depending on the dominant physical processes of that particular region. Also considered is the tremendous amount of uncertainty present in the carbon emission scenarios and how this uncertainty propagates through the hydrologic simulations.
NASA Astrophysics Data System (ADS)
Honti, Mark; Schuwirth, Nele; Rieckermann, Jörg; Stamm, Christian
2017-03-01
The design and evaluation of solutions for integrated surface water quality management requires an integrated modelling approach. Integrated models have to be comprehensive enough to cover the aspects relevant for management decisions, allowing for mapping of larger-scale processes such as climate change to the regional and local contexts. Besides this, models have to be sufficiently simple and fast to apply proper methods of uncertainty analysis, covering model structure deficits and error propagation through the chain of sub-models. Here, we present a new integrated catchment model satisfying both conditions. The conceptual iWaQa model was developed to support the integrated management of small streams. It can be used to predict traditional water quality parameters, such as nutrients and a wide set of organic micropollutants (plant and material protection products), by considering all major pollutant pathways in urban and agricultural environments. Due to its simplicity, the model allows for a full, propagative analysis of predictive uncertainty, including certain structural and input errors. The usefulness of the model is demonstrated by predicting future surface water quality in a small catchment with mixed land use in the Swiss Plateau. We consider climate change, population growth or decline, socio-economic development, and the implementation of management strategies to tackle urban and agricultural point and non-point sources of pollution. Our results indicate that input and model structure uncertainties are the most influential factors for certain water quality parameters. In these cases model uncertainty is already high for present conditions. Nevertheless, accounting for today's uncertainty makes management fairly robust to the foreseen range of potential changes in the next decades. The assessment of total predictive uncertainty allows for selecting management strategies that show small sensitivity to poorly known boundary conditions. The identification of important sources of uncertainty helps to guide future monitoring efforts and pinpoints key indicators, whose evolution should be closely followed to adapt management. The possible impact of climate change is clearly demonstrated by water quality substantially changing depending on single climate model chains. However, when all climate trajectories are combined, the human land use and management decisions have a larger influence on water quality against a time horizon of 2050 in the study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Yixing; Adams, Brian M.; Witkowski, Walter R.
2011-04-01
The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).
How multiple causes combine: independence constraints on causal inference.
Liljeholm, Mimi
2015-01-01
According to the causal power view, two core constraints, namely that causes occur independently (i.e., no confounding) and that they influence their effects independently, serve as boundary conditions for causal induction. This study investigated how violations of these constraints modulate uncertainty about the existence and strength of a causal relationship. Participants were presented with pairs of candidate causes that were either confounded or not, and that either interacted or exerted their influences independently. Consistent with the causal power view, uncertainty about the existence and strength of causal relationships was greater when causes were confounded or interacted than when unconfounded and acting independently. An elemental Bayesian causal model captured differences in uncertainty due to confounding but not those due to an interaction. Implications of distinct sources of uncertainty for the selection of contingency information and causal generalization are discussed.
Implications of allometric model selection for county-level biomass mapping
Laura Duncanson; Wenli Huang; Kristofer Johnson; Anu Swatantran; Ronald E. McRoberts; Ralph Dubayah
2017-01-01
Background: Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend...
Characterizing Epistemic Uncertainty for Launch Vehicle Designs
NASA Technical Reports Server (NTRS)
Novack, Steven D.; Rogers, Jim; Al Hassan, Mohammad; Hark, Frank
2016-01-01
NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk, and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results, and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods, such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty, are rendered obsolete, since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods. This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper describes how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.
Development of a Prototype Decision Support System to Manage the Air Force Alternative Care Program
1990-09-01
development model was selected to structure the development process. Since it is necessary to ensure...uncertainty. Furthermore, the SDLC model provides a specific framework "by which an application is conceived, developed, and implemented" (Davis and Olson...associated with the automation of the manual ACP procedures. The SDLC Model has three stages: (1) definition, (2) development, and (3) installation
NASA Technical Reports Server (NTRS)
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike;
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semianalytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long-term, such an approach will further aid algorithm development for ocean-colour studies.
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
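For readers unfamiliar with the mechanics, AIC-based model averaging rests on Akaike weights computed from the fitted models' maximized log-likelihoods and parameter counts. The snippet below is a generic, hypothetical illustration; the model names, likelihoods, and parameter counts are invented and are not values from the study.

```python
import numpy as np

# Hypothetical maximized log-likelihoods and free parameter counts for three
# candidate substitution models (illustrative values only).
models = {"JC69": (-3120.4, 0), "HKY85": (-3052.1, 4), "GTR+G": (-3041.7, 9)}

aic = {m: -2 * ll + 2 * k for m, (ll, k) in models.items()}
delta = {m: a - min(aic.values()) for m, a in aic.items()}
rel_lik = {m: np.exp(-0.5 * d) for m, d in delta.items()}
total = sum(rel_lik.values())
weights = {m: r / total for m, r in rel_lik.items()}  # Akaike weights

for m in models:
    print(f"{m}: AIC={aic[m]:.1f}, dAIC={delta[m]:.1f}, weight={weights[m]:.3f}")
```

A model-averaged estimate of a shared parameter is then the weight-sum of the per-model estimates, and the summed weights of all models containing a given parameter indicate that parameter's relative importance, as described in the abstract.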
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2018-01-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (τ), effective radius (reff), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available. PMID:29707470
NASA Technical Reports Server (NTRS)
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2016-01-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud-top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary datasets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that, for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
NASA Technical Reports Server (NTRS)
Wang, Chenxi; Platnick, Steven; Zhang, Zhibo; Meyer, Kerry; Yang, Ping
2016-01-01
An optimal estimation (OE) retrieval method is developed to infer three ice cloud properties simultaneously: optical thickness (tau), effective radius (r(sub eff)), and cloud top height (h). This method is based on a fast radiative transfer (RT) model and infrared (IR) observations from the MODerate resolution Imaging Spectroradiometer (MODIS). This study conducts thorough error and information content analyses to understand the error propagation and performance of retrievals from various MODIS band combinations under different cloud/atmosphere states. Specifically, the algorithm takes into account four error sources: measurement uncertainty, fast RT model uncertainty, uncertainties in ancillary data sets (e.g., atmospheric state), and assumed ice crystal habit uncertainties. It is found that the ancillary and ice crystal habit error sources dominate the MODIS IR retrieval uncertainty and cannot be ignored. The information content analysis shows that for a given ice cloud, the use of four MODIS IR observations is sufficient to retrieve the three cloud properties. However, the selection of MODIS IR bands that provide the most information and their order of importance varies with both the ice cloud properties and the ambient atmospheric and the surface states. As a result, this study suggests the inclusion of all MODIS IR bands in practice since little a priori information is available.
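The error and information content analyses described in these abstracts follow standard optimal-estimation diagnostics: a posterior error covariance, an averaging kernel, and the degrees of freedom for signal. The following is a minimal generic sketch with invented Jacobian and covariance values, not the operational MODIS algorithm.

```python
import numpy as np

# Hypothetical linearised forward model: 4 IR bands, 3 state variables
# (log optical thickness, effective radius, cloud-top height). Values are
# illustrative only.
K = np.array([[0.8, 0.1, 0.3],
              [0.6, 0.4, 0.2],
              [0.3, 0.7, 0.1],
              [0.2, 0.2, 0.9]])                 # Jacobian d(observation)/d(state)
S_e = np.diag([0.05, 0.05, 0.08, 0.08]) ** 2    # measurement + forward model error covariance
S_a = np.diag([1.0, 1.0, 1.0]) ** 2             # a priori covariance

# Posterior (retrieval) error covariance and averaging kernel
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
A = S_hat @ K.T @ np.linalg.inv(S_e) @ K

print("posterior 1-sigma uncertainties:", np.sqrt(np.diag(S_hat)))
print("degrees of freedom for signal:", np.trace(A))
```

Repeating this calculation for different subsets of bands (rows of K) and different cloud/atmosphere states is, in essence, how the band-selection question discussed above is posed.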
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new purely data-based scale-extrapolation method is proposed, to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
Natural flow regimes of the Ozark-Ouachita Interior Highlands region
Leasure, D. R.; Magoulick, Daniel D.; Longing, S. D.
2016-01-01
Natural flow regimes represent the hydrologic conditions to which native aquatic organisms are best adapted. We completed a regional river classification and quantitative descriptions of each natural flow regime for the Ozark–Ouachita Interior Highlands region of Arkansas, Missouri and Oklahoma. On the basis of daily flow records from 64 reference streams, seven natural flow regimes were identified with mixture model cluster analysis: Groundwater Stable, Groundwater, Groundwater Flashy, Perennial Runoff, Runoff Flashy, Intermittent Runoff and Intermittent Flashy. Sets of flow metrics were selected that best quantified nine ecologically important components of these natural flow regimes. An uncertainty analysis was performed to avoid selecting metrics strongly affected by measurement uncertainty that can result from short periods of record. Measurement uncertainties (bias, precision and accuracy) were assessed for 170 commonly used flow metrics. The ranges of variability expected for select flow metrics under natural conditions were quantified for each flow regime to provide a reference for future assessments of hydrologic alteration. A random forest model was used to predict the natural flow regimes of all stream segments in the study area based on climate and catchment characteristics, and a map was produced. The geographic distribution of flow regimes suggested distinct ecohydrological regions that may be useful for conservation planning. This project provides a hydrologic foundation for future examination of flow–ecology relationships in the region. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Inexact Socio-Dynamic Modeling of Groundwater Contamination Management
NASA Astrophysics Data System (ADS)
Vesselinov, V. V.; Zhang, X.
2015-12-01
Groundwater contamination may alter the behavior of the public, for example by prompting adaptation to the contamination event. Conversely, social behaviors may affect groundwater contamination and the associated risk levels, for example by changing the amount of contaminated groundwater that is ingested. Decisions should consider not only the contamination itself, but also social attitudes toward such contamination events. Such decisions are inherently associated with uncertainty, such as the subjective judgement of decision makers and their implicit knowledge in deciding whether to supply water, or to reduce the amount of supplied water, under the contamination scenario. A socio-dynamic model based on the theories of information gaps and fuzzy sets is being developed to address the social behaviors surrounding groundwater contamination; it is applied to a synthetic problem designed after typical groundwater remediation sites, where the effects of social behaviors on decisions are investigated and analyzed. Different uncertainties, including deep uncertainty and vague/ambiguous uncertainty, are effectively and integrally addressed. The results can provide scientifically defensible decision support for groundwater management in the face of contamination.
NASA Astrophysics Data System (ADS)
Peel, M. C.; Srikanthan, R.; McMahon, T. A.; Karoly, D. J.
2015-04-01
Two key sources of uncertainty in projections of future runoff for climate change impact assessments are uncertainty between global climate models (GCMs) and within a GCM. Within-GCM uncertainty is the variability in GCM output that occurs when running a scenario multiple times but each run has slightly different, but equally plausible, initial conditions. The limited number of runs available for each GCM and scenario combination within the Coupled Model Intercomparison Project phase 3 (CMIP3) and phase 5 (CMIP5) data sets, limits the assessment of within-GCM uncertainty. In this second of two companion papers, the primary aim is to present a proof-of-concept approximation of within-GCM uncertainty for monthly precipitation and temperature projections and to assess the impact of within-GCM uncertainty on modelled runoff for climate change impact assessments. A secondary aim is to assess the impact of between-GCM uncertainty on modelled runoff. Here we approximate within-GCM uncertainty by developing non-stationary stochastic replicates of GCM monthly precipitation and temperature data. These replicates are input to an off-line hydrologic model to assess the impact of within-GCM uncertainty on projected annual runoff and reservoir yield. We adopt stochastic replicates of available GCM runs to approximate within-GCM uncertainty because large ensembles, hundreds of runs, for a given GCM and scenario are unavailable, other than the Climateprediction.net data set for the Hadley Centre GCM. To date within-GCM uncertainty has received little attention in the hydrologic climate change impact literature and this analysis provides an approximation of the uncertainty in projected runoff, and reservoir yield, due to within- and between-GCM uncertainty of precipitation and temperature projections. In the companion paper, McMahon et al. (2015) sought to reduce between-GCM uncertainty by removing poorly performing GCMs, resulting in a selection of five better performing GCMs from CMIP3 for use in this paper. Here we present within- and between-GCM uncertainty results in mean annual precipitation (MAP), mean annual temperature (MAT), mean annual runoff (MAR), the standard deviation of annual precipitation (SDP), standard deviation of runoff (SDR) and reservoir yield for five CMIP3 GCMs at 17 worldwide catchments. Based on 100 stochastic replicates of each GCM run at each catchment, within-GCM uncertainty was assessed in relative form as the standard deviation expressed as a percentage of the mean of the 100 replicate values of each variable. The average relative within-GCM uncertainties from the 17 catchments and 5 GCMs for 2015-2044 (A1B) were MAP 4.2%, SDP 14.2%, MAT 0.7%, MAR 10.1% and SDR 17.6%. The Gould-Dincer Gamma (G-DG) procedure was applied to each annual runoff time series for hypothetical reservoir capacities of 1 × MAR and 3 × MAR and the average uncertainties in reservoir yield due to within-GCM uncertainty from the 17 catchments and 5 GCMs were 25.1% (1 × MAR) and 11.9% (3 × MAR). Our approximation of within-GCM uncertainty is expected to be an underestimate due to not replicating the GCM trend. However, our results indicate that within-GCM uncertainty is important when interpreting climate change impact assessments. Approximately 95% of values of MAP, SDP, MAT, MAR, SDR and reservoir yield from 1 × MAR or 3 × MAR capacity reservoirs are expected to fall within twice their respective relative uncertainty (standard deviation/mean). 
Within-GCM uncertainty has significant implications for interpreting climate change impact assessments that report future changes within our range of uncertainty for a given variable - these projected changes may be due solely to within-GCM uncertainty. Since within-GCM variability is amplified from precipitation to runoff and then to reservoir yield, climate change impact assessments that do not take into account within-GCM uncertainty risk providing water resources management decision makers with a sense of certainty that is unjustified.
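As a point of reference, the within-GCM uncertainty figures quoted above are standard deviations of replicate values expressed as a percentage of their mean. The sketch below uses synthetic replicates; in the study the values come from the hydrologic model driven by the stochastic climate replicates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 100 stochastic replicates of mean annual runoff (mm)
# at one catchment for one GCM run.
mar_replicates = rng.normal(loc=450.0, scale=45.0, size=100)

relative_uncertainty = 100.0 * mar_replicates.std(ddof=1) / mar_replicates.mean()
print(f"within-GCM relative uncertainty of MAR: {relative_uncertainty:.1f}%")

# Averaging this quantity over catchments and GCM runs gives figures comparable
# to the ~10% reported for MAR in the abstract.
```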
Uncertainty assessment of future land use in Brazil under increasing demand for bioenergy
NASA Astrophysics Data System (ADS)
van der Hilst, F.; Verstegen, J. A.; Karssenberg, D.; Faaij, A.
2013-12-01
Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatio-temporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, three model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) a model to project the quantity of change of all land uses, and 3) a spatially explicit land use model that determines the location of change of all land uses. All three model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics in both Brazil itself and other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries hereof, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from literature. Using a Bayesian data assimilation technique and census data from 2007 to 2011 as observational data, the model is identified, meaning that the final selection and optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map especially the locations of pastures are uncertain. Since the dynamics in the livestock sector play a major role in the land use development of Brazil, the effect of this uncertainty on the model output is large. Results of the data assimilation indicate that the drivers of location of the land uses vary over time (variations up to 50% in the importance of the drivers) making it difficult to find a solid stationary system representation. Overall, we conclude that projection up to 2030 is only of use for quantifying impacts that act on a larger aggregation level, because at local level uncertainty is too large.
Communicating spatial uncertainty to non-experts using R
NASA Astrophysics Data System (ADS)
Luzzi, Damiano; Sawicka, Kasia; Heuvelink, Gerard; de Bruin, Sytze
2016-04-01
Effective visualisation methods are important for the efficient use of uncertainty information for various groups of users. Uncertainty propagation analysis is often used with spatial environmental models to quantify the uncertainty within the information. A challenge arises when trying to effectively communicate the uncertainty information to non-experts (not statisticians) in a wide range of cases. Due to the growing popularity and applicability of the open source programming language R, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. The package has implemented Monte Carlo algorithms for uncertainty propagation, the output of which is represented by an ensemble of model outputs (i.e. a sample from a probability distribution). Numerous visualisation methods exist that aim to present such spatial uncertainty information both statically, dynamically and interactively. To provide the most universal visualisation tools for non-experts, we conducted a survey on a group of 20 university students and assessed the effectiveness of selected static and interactive methods for visualising uncertainty in spatial variables such as DEM and land cover. The static methods included adjacent maps and glyphs for continuous variables. Both allow for displaying maps with information about the ensemble mean, variance/standard deviation and prediction intervals. Adjacent maps were also used for categorical data, displaying maps of the most probable class, as well as its associated probability. The interactive methods included a graphical user interface, which in addition to displaying the previously mentioned variables also allowed for comparison of joint uncertainties at multiple locations. The survey indicated that users could understand the basics of the uncertainty information displayed in the static maps, with the interactive interface allowing for more in-depth information. Subsequently, the R package included a collation of the plotting functions that were evaluated in the survey. The implementation of static visualisations was done via calls to the 'ggplot2' package. This allowed the user to provide control over the content, legend, colours, axes and titles. The interactive methods were implemented using the 'shiny' package allowing users to activate the visualisation of statistical descriptions of uncertainty through interaction with a plotted map of means. This research brings uncertainty visualisation to a broader audience through the development of tools for visualising uncertainty using open source software.
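The propagation step the package implements follows the usual Monte Carlo pattern: draw realisations of the uncertain spatial input, run the model on each, and summarise the ensemble for plotting. The package described is written in R; the sketch below is only a language-neutral toy illustration with an invented slope-from-DEM model.

```python
import numpy as np

rng = np.random.default_rng(42)

def slope_model(dem):
    """Toy 'environmental model': local slope magnitude from a DEM grid."""
    gy, gx = np.gradient(dem)
    return np.hypot(gx, gy)

# Uncertain input: a 50x50 DEM with spatially uncorrelated 2 m standard error
# (a real analysis would typically use a spatially correlated error model).
dem_mean = rng.normal(100.0, 10.0, size=(50, 50))
n_mc = 200
outputs = np.stack([slope_model(dem_mean + rng.normal(0.0, 2.0, dem_mean.shape))
                    for _ in range(n_mc)])

# Ensemble summaries of the kind the static visualisations display
slope_mean = outputs.mean(axis=0)
slope_sd = outputs.std(axis=0, ddof=1)
lower, upper = np.percentile(outputs, [5, 95], axis=0)
print("mean slope:", slope_mean.mean(), " mean sd:", slope_sd.mean())
```

The ensemble mean, standard deviation and prediction-interval maps computed this way are exactly the quantities shown in the adjacent-map and glyph displays described above.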
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
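A schematic of the surrogate-plus-LHS alternative described above: fit a cheap response surface to a modest number of expensive model runs, then sample the surrogate densely with Latin Hypercube draws to obtain noise-insensitive estimates of the output mean and standard deviation. Everything below (the toy response, dimensions, and sample sizes) is illustrative and stands in for the finite-element foam decomposition model.

```python
import numpy as np
from scipy.stats import qmc

def expensive_model(x):
    """Stand-in for the finite-element front-velocity calculation (3 inputs in [0, 1])."""
    return 1.0 + 0.8 * x[:, 0] - 0.5 * x[:, 1] + 0.3 * x[:, 0] * x[:, 2]

# A few 'expensive' runs used to fit a linear surrogate response surface (LIN)
x_train = qmc.LatinHypercube(d=3, seed=1).random(20)
y_train = expensive_model(x_train)
design = np.hstack([np.ones((20, 1)), x_train])
coef, *_ = np.linalg.lstsq(design, y_train, rcond=None)

# Cheap LHS sampling of the surrogate to estimate output statistics
x_lhs = qmc.LatinHypercube(d=3, seed=2).random(5000)
y_lhs = np.hstack([np.ones((5000, 1)), x_lhs]) @ coef
print(f"surrogate mean={y_lhs.mean():.3f}, std={y_lhs.std(ddof=1):.3f}")
```

Adding quadratic terms to the design matrix gives the QUAD variant, and running the LHS samples through the full finite element model rather than the surrogate corresponds to the DLHS check mentioned in the abstract.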
Performance assessment of a Bayesian Forecasting System (BFS) for real-time flood forecasting
NASA Astrophysics Data System (ADS)
Biondi, D.; De Luca, D. L.
2013-02-01
The paper evaluates, for a number of flood events, the performance of a Bayesian Forecasting System (BFS), with the aim of evaluating total uncertainty in real-time flood forecasting. The predictive uncertainty of future streamflow is estimated through the Bayesian integration of two separate processors. The former evaluates the propagation of input uncertainty on simulated river discharge, the latter computes the hydrological uncertainty of actual river discharge associated with all other possible sources of error. A stochastic model and a distributed rainfall-runoff model were assumed, respectively, for rainfall and hydrological response simulations. A case study was carried out for a small basin in the Calabria region (southern Italy). The performance assessment of the BFS was performed with adequate verification tools suited for probabilistic forecasts of continuous variables such as streamflow. Graphical tools and scalar metrics were used to evaluate several attributes of the forecast quality of the entire time-varying predictive distributions: calibration, sharpness, accuracy, and continuous ranked probability score (CRPS). Besides the overall system, which incorporates both sources of uncertainty, other hypotheses resulting from the BFS properties were examined, corresponding to (i) a perfect hydrological model; (ii) a non-informative rainfall forecast for predicting streamflow; and (iii) a perfect input forecast. The results emphasize the importance of using different diagnostic approaches to perform comprehensive analyses of predictive distributions, to arrive at a multifaceted view of the attributes of the prediction. For the case study, the selected criteria revealed the interaction of the different sources of error, in particular the crucial role of the hydrological uncertainty processor when compensating, at the cost of wider forecast intervals, for the unreliable and biased predictive distribution resulting from the Precipitation Uncertainty Processor.
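Of the scalar metrics mentioned, the continuous ranked probability score has a particularly simple ensemble estimator, CRPS = E|X - y| - 0.5 E|X - X'|. The sketch below uses synthetic forecast members and is only an illustration of the formula, not the authors' verification code.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast against a single observation."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(3)
forecast = rng.gamma(shape=4.0, scale=5.0, size=500)   # synthetic streamflow members (m3/s)
observed = 18.0
print(f"CRPS = {crps_ensemble(forecast, observed):.2f} m3/s (lower is better)")
```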
The impact of lake and reservoir parameterization on global streamflow simulation.
Zajac, Zuzanna; Revilla-Romero, Beatriz; Salamon, Peter; Burek, Peter; Hirpa, Feyera A; Beck, Hylke
2017-05-01
Lakes and reservoirs affect the timing and magnitude of streamflow, and are therefore essential hydrological model components, especially in the context of global flood forecasting. However, the parameterization of lake and reservoir routines on a global scale is subject to considerable uncertainty due to lack of information on lake hydrographic characteristics and reservoir operating rules. In this study we estimated the effect of lakes and reservoirs on global daily streamflow simulations of a spatially-distributed LISFLOOD hydrological model. We applied state-of-the-art global sensitivity and uncertainty analyses for selected catchments to examine the effect of uncertain lake and reservoir parameterization on model performance. Streamflow observations from 390 catchments around the globe and multiple performance measures were used to assess model performance. Results indicate a considerable geographical variability in the lake and reservoir effects on the streamflow simulation. Nash-Sutcliffe Efficiency (NSE) and Kling-Gupta Efficiency (KGE) metrics improved for 65% and 38% of catchments respectively, with median skill score values of 0.16 and 0.2 while scores deteriorated for 28% and 52% of the catchments, with median values -0.09 and -0.16, respectively. The effect of reservoirs on extreme high flows was substantial and widespread in the global domain, while the effect of lakes was spatially limited to a few catchments. As indicated by global sensitivity analysis, parameter uncertainty substantially affected uncertainty of model performance. Reservoir parameters often contributed to this uncertainty, although the effect varied widely among catchments. The effect of reservoir parameters on model performance diminished with distance downstream of reservoirs in favor of other parameters, notably groundwater-related parameters and channel Manning's roughness coefficient. This study underscores the importance of accounting for lakes and, especially, reservoirs and using appropriate parameterization in large-scale hydrological simulations.
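For reference, the two efficiency metrics named above can be computed as follows (toy series only; the skill scores reported in the abstract additionally compare simulations with and without the lake and reservoir routines).

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    """Kling-Gupta Efficiency (2009 formulation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std(ddof=1) / obs.std(ddof=1)   # variability ratio
    beta = sim.mean() / obs.mean()              # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([10., 12., 30., 55., 40., 22., 15., 11.])
sim = np.array([11., 14., 25., 60., 38., 20., 17., 10.])
print(f"NSE = {nse(sim, obs):.2f}, KGE = {kge(sim, obs):.2f}")
```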
NASA Astrophysics Data System (ADS)
Smith, L. A.
2001-05-01
Many sources of uncertainty come into play when modelling geophysical systems by simulation. These include uncertainty in the initial condition, uncertainty in model parameter values (and the parameterisations themselves) and error in the model class from which the model(s) was selected. In recent decades, climate simulations have focused resources on reducing the last of these by including more and more detail in the model. One can question when this "kitchen sink" approach should be complemented with realistic estimates of the impact from the other uncertainties noted above. Indeed, while the impact of model error can never be fully quantified, as all simulation experiments are interpreted under the rosy scenario which assumes a priori that nothing crucial is missing, the impact of the other uncertainties can be quantified at only the cost of computational power; as illustrated, for example, in ensemble climate modelling experiments like Casino-21. This talk illustrates the interplay of uncertainties in the context of a trivial nonlinear system and an ensemble of models. The simple systems considered in this small-scale experiment, Keno-21, are meant to illustrate issues of experimental design; they are not intended to provide true climate simulations. The use of simulation models with huge numbers of parameters given limited data is usually justified by an appeal to the Laws of Physics: the number of free degrees of freedom is many fewer than the number of variables; variables, parameterisations, and parameter values are constrained by "the physics" and the resulting simulation yields a realistic reproduction of the entire planet's climate system to within reasonable bounds. But what bounds, exactly? In a single model run under a transient forcing scenario, there are good statistical grounds for considering only large space and time averages; most of these reasons vanish if an ensemble of runs is made. Ensemble runs can quantify the (in)ability of a model to provide insight on regional changes: if a model cannot capture regional variations in the data on which the model was constructed (that is, in-sample), claims that out-of-sample predictions of those same regional averages should be used in policy making are vacuous. While motivated by climate modelling and illustrated on a trivial nonlinear system, these issues have implications across the range of geophysical modelling. These include implications for appropriate resource allocation, for the making of science policy, and for the public understanding of science and the role of uncertainty in decision making.
Uncertainties in Galactic Chemical Evolution Models
Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...
2016-06-15
Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.
Dettmer, Jan; Dosso, Stan E; Osler, John C
2010-12-01
This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.
Optimizing larval assessment to support sea lamprey control in the Great Lakes
Hansen, Michael J.; Adams, Jean V.; Cuddy, Douglas W.; Richards, Jessica M.; Fodale, Michael F.; Larson, Geraldine L.; Ollila, Dale J.; Slade, Jeffrey W.; Steeves, Todd B.; Young, Robert J.; Zerrenner, Adam
2003-01-01
Elements of the larval sea lamprey (Petromyzon marinus) assessment program that most strongly influence the chemical treatment program were analyzed, including selection of streams for larval surveys, allocation of sampling effort among stream reaches, allocation of sampling effort among habitat types, estimation of daily growth rates, and estimation of metamorphosis rates, to determine how uncertainty in each element influenced the stream selection program. First, the stream selection model based on current larval assessment sampling protocol significantly underestimated transforming sea lamprey abundance, transforming sea lampreys killed, and marginal costs per sea lamprey killed, compared to a protocol that included more years of data (especially for large streams). Second, larval density in streams varied significantly with Type-I habitat area, but not with total area or reach length. Third, the ratio of larval density between Type-I and Type-II habitat varied significantly among streams, and the optimal allocation of sampling effort varied with the proportion of habitat types and variability of larval density within each habitat. Fourth, mean length varied significantly among streams and years. Last, size at metamorphosis varied more among years than within or among regions, and metamorphosis varied significantly among streams within regions. Study results indicate that: (1) the stream selection model should be used to identify streams with potentially high residual populations of larval sea lampreys; (2) larval sampling in Type-II habitat should be initiated in all streams by increasing sampling in Type-II habitat to 50% of the sampling effort in Type-I habitat; and (3) methods should be investigated to reduce uncertainty in estimates of sea lamprey production, with emphasis on those that reduce the uncertainty associated with larval length at the end of the growing season and those used to predict metamorphosis.
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty of selecting the form and/or model involved in reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov Chain Monte Carlo simulation. A case study of China's Baise reservoir shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, is used as the sample set from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
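A hedged sketch of the BMA combination step: the member models' predictive densities are mixed according to their posterior model weights, and the 90% release interval is read off the mixture. The weights, means, and spreads below are invented; in the study they are estimated from the optimal-operation samples.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical BMA weights and Gaussian predictive densities (release in m3/s)
# for the three member rules: piecewise linear, surface fit, LS-SVM.
weights = np.array([0.2, 0.3, 0.5])
means = np.array([520.0, 560.0, 545.0])
sds = np.array([60.0, 55.0, 40.0])

# Sample the BMA mixture and estimate the 90% release interval
n = 100_000
component = rng.choice(3, size=n, p=weights)
release = rng.normal(means[component], sds[component])
lo, hi = np.percentile(release, [5, 95])
print(f"BMA mean release = {release.mean():.0f} m3/s, 90% interval = ({lo:.0f}, {hi:.0f})")
```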
NASA Astrophysics Data System (ADS)
Sun, Yun-Hsiang; Sun, Yuming; Wu, Christine Qiong; Sepehri, Nariman
2018-04-01
Parameters of a friction model identified for a specific control system development are not constants. They vary over time and have a significant effect on the control system stability. Although much research has been devoted to stability analysis under parametric uncertainty, less attention has been paid to incorporating a realistic friction model into the analysis. After reviewing the common friction models for controller design, a modified LuGre friction model is selected to carry out the stability analysis in this study. Two parameters of the LuGre model, namely σ0 and σ1, are critical to the demonstration of dynamic friction features, yet their identification is difficult to carry out, resulting in a high level of uncertainty in their values. Aiming at uncovering the effect of the σ0 and σ1 variations on the control system stability, a servomechanism with the modified LuGre friction model is investigated. Two set-point position controllers are synthesised based on the servomechanism model to form two case studies. Through Lyapunov exponents, it is clear that the variation of σ0 and σ1 has an obvious effect on the stability of the studied systems and should not be overlooked in the design phase.
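For orientation, the standard LuGre model that the modified variant builds on couples an internal bristle state z to the sliding velocity; σ0 and σ1 are the bristle stiffness and micro-damping whose uncertain values drive the stability question. The simulation below uses the unmodified model with illustrative parameter values, purely as a sketch.

```python
import numpy as np

# Standard LuGre friction model:
#   dz/dt = v - sigma0 * |v| * z / g(v)
#   F     = sigma0 * z + sigma1 * dz/dt + sigma2 * v
# with the Stribeck curve g(v) = Fc + (Fs - Fc) * exp(-(v/vs)**2).
sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4     # illustrative values only
Fc, Fs, vs = 1.0, 1.5, 0.01

def g(v):
    return Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)

def simulate(v_of_t, dt=1e-4, t_end=2.0):
    t = np.arange(0.0, t_end, dt)
    z, F = 0.0, np.zeros_like(t)
    for i, ti in enumerate(t):
        v = v_of_t(ti)
        zdot = v - sigma0 * abs(v) * z / g(v)
        z += zdot * dt                        # explicit Euler, adequate for a sketch
        F[i] = sigma0 * z + sigma1 * zdot + sigma2 * v
    return t, F

t, F = simulate(lambda ti: 0.02 * np.sin(2 * np.pi * ti))   # slow sinusoidal velocity
print("peak friction force:", F.max())
```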
A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty
Xu, Ye; Huang, Guohe; Xu, Ling
2014-01-01
In this study, a fuzzy robust optimization (FRO) model was developed for supporting municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a study case for demonstration. Comparing with traditional fuzzy models, the FRO model made improvement by considering the minimization of the weighted summation among the expected objective values, the differences between two extreme possible objective values, and the penalty of the constraints violation as the objective function, instead of relying purely on the minimization of expected value. Such an improvement leads to enhanced system reliability and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model, including (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration for both. The balanced scenario solution was recommended for decision makers, since it respected both system economy and reliability. The model proved valuable in providing a comprehensive profile about the studied system and helping decision makers gain an in-depth insight into system complexity and select cost-effective management strategies. PMID:25317037
A Fuzzy Robust Optimization Model for Waste Allocation Planning Under Uncertainty.
Xu, Ye; Huang, Guohe; Xu, Ling
2014-10-01
In this study, a fuzzy robust optimization (FRO) model was developed for supporting municipal solid waste management under uncertainty. The Development Zone of the City of Dalian, China, was used as a study case for demonstration. Comparing with traditional fuzzy models, the FRO model made improvement by considering the minimization of the weighted summation among the expected objective values, the differences between two extreme possible objective values, and the penalty of the constraints violation as the objective function, instead of relying purely on the minimization of expected value. Such an improvement leads to enhanced system reliability and the model becomes especially useful when multiple types of uncertainties and complexities are involved in the management system. Through a case study, the applicability of the FRO model was successfully demonstrated. Solutions under three future planning scenarios were provided by the FRO model, including (1) priority on economic development, (2) priority on environmental protection, and (3) balanced consideration for both. The balanced scenario solution was recommended for decision makers, since it respected both system economy and reliability. The model proved valuable in providing a comprehensive profile about the studied system and helping decision makers gain an in-depth insight into system complexity and select cost-effective management strategies.
Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ampomah, William; Balch, Robert; Will, Robert
This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising the Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% of CO2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty
Ampomah, William; Balch, Robert; Will, Robert; ...
2017-07-01
This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising the Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% of CO2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.
Vilhelmsen, Troels N; Ferré, Ty P A
2018-05-01
Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in the data used to derive model structure and to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given common financial restrictions, it is critical that we identify data with maximal information content with respect to the forecasts of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to an optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets. © 2017, National Ground Water Association.
Solving multistage stochastic programming models of portfolio selection with outstanding liabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edirisinghe, C.
1994-12-31
Models for portfolio selection in the presence of an outstanding liability have received significant attention, for example, models for pricing options. The problem may be described briefly as follows: given a set of risky securities (and a riskless security such as a bond), and given a set of cash flows, i.e., an outstanding liability, to be met at some future date, determine an initial portfolio and a dynamic trading strategy for the underlying securities such that the initial cost of the portfolio is within a prescribed wealth level and the expected cash surplus arising from trading is maximized. While the trading strategy should be self-financing, there may also be other restrictions such as leverage and short-sale constraints. Usually the treatment is limited to binomial evolution of uncertainty (of stock price), with possible extensions for developing computational bounds for multinomial generalizations. Posing these as stochastic programming models of decision making, we investigate alternative efficient solution procedures under continuous evolution of uncertainty, for discrete-time economies. We point out an important moment problem arising in the portfolio selection problem, the solution of which (or bounds on which) provides the basis for developing efficient computational algorithms. While the underlying stochastic program may be computationally tedious even for a modest number of trading opportunities (i.e., time periods), the derived algorithms may be used to solve problems whose sizes are beyond those considered within stochastic optimization.
Optimization of Geothermal Well Placement under Geological Uncertainty
NASA Astrophysics Data System (ADS)
Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian
2017-04-01
Well placement optimization is critical to the commercial success of geothermal projects. However, uncertainties in geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally requires trying different development options on a single geological realization, yet many different interpretations are possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow for the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling workflow. The optimal choice of boreholes prolongs the time to cold water breakthrough and allows for higher pump rates and increased water production temperatures.
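The core of the proxy-based optimization can be sketched in a few lines: fit a polynomial response surface per geological scenario, then choose the design that maximizes the mean proxy response across scenarios. The one-dimensional toy below (a single normalised design variable and five invented scenarios) only illustrates the idea, not the Watt field workflow.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical training data: heat extraction (arbitrary units) "simulated" for
# combinations of a normalised design variable x and 5 geological scenarios.
x_train = np.linspace(0.0, 1.0, 9)
n_scenarios = 5
# Stand-in for reservoir simulations: each scenario peaks at a different x.
y_train = np.array([[100.0 - 60.0 * (x - (0.45 + 0.05 * s)) ** 2 + rng.normal(0, 1)
                     for x in x_train] for s in range(n_scenarios)])

# Quadratic proxy (response surface) fitted per geological scenario
proxies = [np.polyfit(x_train, y_train[s], deg=2) for s in range(n_scenarios)]

# Optimise the MEAN proxy response across scenarios (robust to geological uncertainty)
x_grid = np.linspace(0.0, 1.0, 501)
mean_response = np.mean([np.polyval(p, x_grid) for p in proxies], axis=0)
best = x_grid[np.argmax(mean_response)]
print(f"design maximising mean predicted heat extraction: x = {best:.2f}")
```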
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
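As a concrete reference for the quantities named above, WAIC is computed from the matrix of pointwise log-likelihoods evaluated at posterior draws. The sketch below uses a synthetic matrix standing in for MCMC output from a hierarchical model.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(5)

# Synthetic pointwise log-likelihoods: 2000 posterior draws x 150 observations.
log_lik = -0.5 * rng.chisquare(df=1, size=(2000, 150)) - 1.0

S = log_lik.shape[0]
# log pointwise predictive density, computed stably with logsumexp
lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
# effective number of parameters: posterior variance of the pointwise log-likelihoods
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
waic = -2.0 * (lppd - p_waic)
print(f"lppd={lppd:.1f}, p_waic={p_waic:.1f}, WAIC={waic:.1f}")
```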
Bayesian-information-gap decision theory with an application to CO2 sequestration
O'Malley, D.; Vesselinov, V. V.
2015-09-04
Decisions related to subsurface engineering problems such as groundwater management, fossil fuel production, and geologic carbon sequestration are frequently challenging because of an overabundance of uncertainties (related to conceptualizations, parameters, observations, etc.). Because of the importance of these problems to agriculture, energy, and the climate (respectively), good decisions that are scientifically defensible must be made despite the uncertainties. We describe a general approach to making decisions for challenging problems such as these in the presence of severe uncertainties that combines probabilistic and non-probabilistic methods. The approach uses Bayesian sampling to assess parametric uncertainty and Information-Gap Decision Theory (IGDT) to address model inadequacy. The combined approach also resolves an issue that frequently arises when applying Bayesian methods to real-world engineering problems related to the enumeration of possible outcomes. In the case of zero non-probabilistic uncertainty, the method reduces to a Bayesian method. Lastly, to illustrate the approach, we apply it to a site-selection decision for geologic CO2 sequestration.
Safety envelope for load tolerance of structural element design based on multi-stage testing
Park, Chanyoung; Kim, Nam H.
2016-09-06
Structural elements, such as stiffened panels and lap joints, are basic components of aircraft structures. For aircraft structural design, designers select predesigned elements satisfying the design load requirement based on their load-carrying capabilities. Therefore, estimation of the safety envelope of structural elements for load tolerances would be a good investment for design purposes. In this article, a method of estimating the safety envelope is presented using probabilistic classification, which can estimate a specific level of failure probability under both aleatory and epistemic uncertainties. An important contribution of this article is that the calculation uncertainty is reflected in building a safety envelope using a Gaussian process, and the effect of element test data on reducing the calculation uncertainty is incorporated by updating the Gaussian process model with the element test data. It is shown that even one element test can significantly reduce the calculation uncertainty due to lacking knowledge of actual physics, so that conservativeness in a safety envelope is significantly reduced. The proposed approach was demonstrated with a cantilever beam example, which represents a structural element. The example shows that calculation uncertainty provides about 93% conservativeness against the uncertainty due to a few element tests. As a result, it is shown that even a single element test can increase the load tolerance modeled with the safety envelope by 20%.
Sperstad, Iver Bakken; Stålhane, Magnus; Dinwoodie, Iain; ...
2017-09-23
Optimising the operation and maintenance (O&M) and logistics strategy of offshore wind farms implies the decision problem of selecting the vessel fleet for O&M. Different strategic decision support tools can be applied to this problem, but much uncertainty remains regarding both input data and modelling assumptions. Our paper aims to investigate and ultimately reduce this uncertainty by comparing four simulation tools, one mathematical optimisation tool and one analytic spreadsheet-based tool applied to select the O&M access vessel fleet that minimizes the total O&M cost of a reference wind farm. The comparison shows that the tools generally agree on the optimal vessel fleet, but only partially agree on the relative ranking of the different vessel fleets in terms of total O&M cost. The robustness of the vessel fleet selection to various input data assumptions was tested, and the ranking was found to be particularly sensitive to the vessels' limiting significant wave height for turbine access. As this is also the parameter with the greatest discrepancy between the tools, accurate quantification and modelling of this parameter is crucial. The ranking is moderately sensitive to turbine failure rates and vessel day rates but less sensitive to electricity price and vessel transit speed.
1995-03-01
advisory system provides a decision framework for selecting an appropriate model from the numerous available transport models conditioned on... Keywords: Groundwater Modeling, Contaminant Transport, Optimization, Total Reliability, Remediation... Even with the choice of an appropriate transport model, considerable uncertainty is likely to be present in the analysis of
Selection of climate change scenario data for impact modelling.
Sloth Madsen, M; Maule, C Fox; MacKellar, N; Olesen, J E; Christensen, J Hesselbjerg
2012-01-01
Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale, a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented in this paper, applied to relative humidity, but it could be adapted to other variables if needed.
Uncertainty Estimation in Tsunami Initial Condition From Rapid Bayesian Finite Fault Modeling
NASA Astrophysics Data System (ADS)
Benavente, R. F.; Dettmer, J.; Cummins, P. R.; Urrutia, A.; Cienfuegos, R.
2017-12-01
It is well known that kinematic rupture models for a given earthquake can present discrepancies even when similar datasets are employed in the inversion process. While quantifying this variability can be critical when making early estimates of the earthquake and triggered tsunami impact, "most likely models" are normally used for this purpose. In this work, we quantify the uncertainty of the tsunami initial condition for the great Illapel earthquake (Mw = 8.3, 2015, Chile). We focus on utilizing data and inversion methods that are suitable for rapid source characterization yet provide meaningful and robust results. Rupture models from teleseismic body and surface waves as well as W-phase are derived and accompanied by Bayesian uncertainty estimates from linearized inversion under positivity constraints. We show that robust and consistent features about the rupture kinematics appear when working within this probabilistic framework. Moreover, by using static dislocation theory, we translate the probabilistic slip distributions into seafloor deformation which we interpret as a tsunami initial condition. After considering uncertainty, our probabilistic seafloor deformation models obtained from different data types appear consistent with each other, providing meaningful results. We also show that selecting just a single "representative" solution from the ensemble of initial conditions for tsunami propagation may lead to overestimating information content in the data. Our results suggest that rapid, probabilistic rupture models can play a significant role during emergency response by providing robust information about the extent of the disaster.
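The positivity-constrained linearized inversion described above can be sketched in a few lines: a non-negative least-squares solve gives one slip model, and re-solving under perturbations of the data within its noise level gives an ensemble that approximates the linearized posterior spread. The forward operator, noise level, and dimensions below are invented for illustration and have no connection to the Illapel datasets.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# illustrative linear forward problem d = G s + noise (G, s, sigma are made up)
n_data, n_patches = 40, 10
G = rng.normal(size=(n_data, n_patches))
s_true = np.abs(rng.normal(1.0, 0.5, n_patches))      # slip is non-negative
sigma = 0.2
d_obs = G @ s_true + rng.normal(0, sigma, n_data)

# ensemble of positivity-constrained solutions: perturb the data within its
# noise level and re-solve, approximating the linearized posterior spread
ensemble = []
for _ in range(500):
    d_pert = d_obs + rng.normal(0, sigma, n_data)
    s_hat, _ = nnls(G, d_pert)
    ensemble.append(s_hat)
ensemble = np.array(ensemble)
print("posterior-like mean slip:", ensemble.mean(axis=0).round(2))
print("patch-wise spread       :", ensemble.std(axis=0).round(2))
```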
Reliability of Current Biokinetic and Dosimetric Models for Radionuclides: A Pilot Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leggett, Richard Wayne; Eckerman, Keith F; Meck, Robert A.
2008-10-01
This report describes the results of a pilot study of the reliability of the biokinetic and dosimetric models currently used by the U.S. Nuclear Regulatory Commission (NRC) as predictors of dose per unit internal or external exposure to radionuclides. The study examines the feasibility of critically evaluating the accuracy of these models for a comprehensive set of radionuclides of concern to the NRC. Each critical evaluation would include: identification of discrepancies between the models and current databases; characterization of uncertainties in model predictions of dose per unit intake or unit external exposure; characterization of variability in dose per unit intake or unit external exposure; and evaluation of prospects for development of more accurate models. Uncertainty refers here to the level of knowledge of a central value for a population, and variability refers to quantitative differences between different members of a population. This pilot study provides a critical assessment of models for selected radionuclides representing different levels of knowledge of dose per unit exposure. The main conclusions of this study are as follows: (1) To optimize the use of available NRC resources, the full study should focus on radionuclides most frequently encountered in the workplace or environment. A list of 50 radionuclides is proposed. (2) The reliability of a dose coefficient for inhalation or ingestion of a radionuclide (i.e., an estimate of dose per unit intake) may depend strongly on the specific application. Multiple characterizations of the uncertainty in a dose coefficient for inhalation or ingestion of a radionuclide may be needed for different forms of the radionuclide and different levels of information of that form available to the dose analyst. (3) A meaningful characterization of variability in dose per unit intake of a radionuclide requires detailed information on the biokinetics of the radionuclide and hence is not feasible for many infrequently studied radionuclides. (4) The biokinetics of a radionuclide in the human body typically represents the greatest source of uncertainty or variability in dose per unit intake. (5) Characterization of uncertainty in dose per unit exposure is generally a more straightforward problem for external exposure than for intake of a radionuclide. (6) For many radionuclides the most important outcome of a large-scale critical evaluation of databases and biokinetic models for radionuclides is expected to be the improvement of current models. Many of the current models do not fully or accurately reflect available radiobiological or physiological information, either because the models are outdated or because they were based on selective or uncritical use of data or inadequate model structures. In such cases the models should be replaced with physiologically realistic models that incorporate a wider spectrum of information.
NASA Astrophysics Data System (ADS)
Lunt, Mark; Rigby, Matt; Manning, Alistair; O'Doherty, Simon; Stavert, Ann; Stanley, Kieran; Young, Dickon; Pitt, Joseph; Bauguitte, Stephane; Allen, Grant; Helfter, Carole; Palmer, Paul
2017-04-01
The Greenhouse gAs Uk and Global Emissions (GAUGE) project aims to quantify the magnitude and uncertainty of key UK greenhouse gas emissions more robustly than previously achieved. Measurements of methane have been taken from a number of tall-tower and surface sites as well as mobile measurement platforms such as a research aircraft and a ferry providing regular transects off the east coast of the UK. Using the UK Met Office's atmospheric transport model, NAME, and a novel Bayesian inversion technique we present estimates of methane emissions from the UK from a number of different combinations of sites to show the robustness of the UK total emissions to network configuration. The impact on uncertainties will be discussed, focusing on the usefulness of the various measurement platforms for constraining UK emissions. We will examine the effects of observation selection and how a priori assumptions about model uncertainty can affect the emission estimates, even within a data-driven hierarchical inversion framework. Finally, we will show the impact of the resolution of the meteorology used to drive the NAME model on emissions estimates, and how to rationalise our understanding of the ability of transport models to represent reality.
Liu, Zun-lei; Yuan, Xing-wei; Yang, Lin-lin; Yan, Li-ping; Zhang, Hui; Cheng, Jia-hua
2015-02-01
Multiple hypotheses are available to explain recruitment rate. Model selection methods can be used to identify the best model that supports a particular hypothesis. However, using a single model for estimating recruitment success is often inadequate for an overexploited population because of high model uncertainty. In this study, stock-recruitment data of small yellow croaker in the East China Sea collected from fishery-dependent and independent surveys between 1992 and 2012 were used to examine density-dependent effects on recruitment success. Model selection methods based on frequentist (AIC, maximum adjusted R2 and P-values) and Bayesian (Bayesian model averaging, BMA) approaches were applied to identify the relationship between recruitment and environmental conditions. Interannual variability of the East China Sea environment was indicated by sea surface temperature (SST), meridional wind stress (MWS), zonal wind stress (ZWS), sea surface pressure (SPP) and runoff of the Changjiang River (RCR). Mean absolute error, mean squared predictive error and continuous ranked probability score were calculated to evaluate the predictive performance of recruitment success. The results showed that model structures were not consistent across the three kinds of model selection methods: the predictive variables selected were spawning abundance and MWS by AIC, spawning abundance alone by P-values, and spawning abundance, MWS and RCR by maximum adjusted R2. The recruitment success decreased linearly with stock abundance (P < 0.01), suggesting that an overcompensation effect in recruitment success might be due to cannibalism or food competition. Meridional wind intensity showed marginally significant and positive effects on recruitment success (P = 0.06), while runoff of the Changjiang River showed a marginally negative effect (P = 0.07). Based on mean absolute error and continuous ranked probability score, the predictive error associated with models obtained from BMA was the smallest amongst the different approaches, while that from models selected based on the P-values of the independent variables was the highest; however, the mean squared predictive error from models selected based on the maximum adjusted R2 was highest. We found that the BMA method could improve the prediction of recruitment success, derive more accurate prediction intervals and quantitatively evaluate model uncertainty.
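As a rough illustration of how competing model selections can be combined, the sketch below fits three nested linear recruitment models to synthetic data, converts AIC values into Akaike weights (used here as a simple stand-in for the posterior model probabilities of full BMA), and forms a weighted prediction. The predictor names echo the abstract, but the data and model forms are fabricated for illustration only.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients and Gaussian log-likelihood."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return beta, loglik

rng = np.random.default_rng(2)
n = 40
spawn, mws, rcr = rng.normal(size=(3, n))                     # illustrative predictors
y = 1.0 - 0.8 * spawn + 0.3 * mws + rng.normal(0, 0.5, n)     # "recruitment success"

# candidate models (column sets), mirroring the competing selections above
ones = np.ones(n)
candidates = {
    "S":         np.column_stack([ones, spawn]),
    "S+MWS":     np.column_stack([ones, spawn, mws]),
    "S+MWS+RCR": np.column_stack([ones, spawn, mws, rcr]),
}

aic, preds = {}, {}
for name, X in candidates.items():
    beta, loglik = fit_ols(X, y)
    aic[name] = 2 * X.shape[1] - 2 * loglik
    preds[name] = X @ beta

# Akaike weights as a simple stand-in for posterior model probabilities
a = np.array(list(aic.values()))
w = np.exp(-0.5 * (a - a.min()))
w /= w.sum()
bma_pred = sum(wi * preds[k] for wi, k in zip(w, candidates))
print("model weights:", dict(zip(candidates, w.round(3))))
print("averaged prediction (first 3):", bma_pred[:3].round(2))
```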
NASA Astrophysics Data System (ADS)
Lu, D.; Ricciuto, D. M.; Evans, K. J.
2017-12-01
Data-worth analysis plays an essential role in improving the understanding of the subsurface system, in developing and refining subsurface models, and in supporting rational water resources management. However, data-worth analysis is computationally expensive as it requires quantifying parameter uncertainty, prediction uncertainty, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface simulations using standard Monte Carlo (MC) sampling or advanced surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose efficient Bayesian analysis of data-worth using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC, which requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, MLMC can substantially reduce the computational cost through the use of multifidelity approximations. As the data-worth analysis involves a great deal of expectation estimation, the cost savings from MLMC in the assessment can be substantial. While the proposed MLMC-based data-worth analysis is broadly applicable, we apply it to a highly heterogeneous oil reservoir simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data and are consistent with the estimates obtained from the standard MC. Compared to the standard MC, however, MLMC greatly reduces the computational cost of the uncertainty-reduction estimation, saving up to 600 days of computing time when one processor is used.
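The telescoping structure that underlies MLMC can be shown with a toy problem. The sketch below estimates the expectation of a quantity from a one-dimensional stochastic model at three fidelity levels, coupling fine and coarse runs through a shared noise path so that many cheap low-fidelity samples carry most of the statistical burden. The toy model and the sample allocation are illustrative; they are not the reservoir setup of the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(n_steps, dW):
    """Euler integration of the toy SDE dY = -Y dt + dW on [0, 1]."""
    dt = 1.0 / n_steps
    y = 1.0
    for w in dW:
        y += -y * dt + w
    return y

def mlmc_estimate(n_steps_per_level, samples_per_level):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
    with fine and coarse runs coupled through a shared Brownian path."""
    total = 0.0
    for lvl, (n_f, n_samp) in enumerate(zip(n_steps_per_level, samples_per_level)):
        diffs = np.empty(n_samp)
        for i in range(n_samp):
            dW_f = rng.normal(0.0, np.sqrt(1.0 / n_f), n_f)
            fine = simulate(n_f, dW_f)
            if lvl == 0:
                diffs[i] = fine
            else:
                # the coarse level reuses the same noise, summed pairwise
                dW_c = dW_f.reshape(n_f // 2, 2).sum(axis=1)
                diffs[i] = fine - simulate(n_f // 2, dW_c)
        total += diffs.mean()
    return total

# many cheap low-fidelity samples, few expensive high-fidelity samples
print(mlmc_estimate(n_steps_per_level=[8, 16, 32], samples_per_level=[4000, 800, 100]))
```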
Skill Assessment for Coupled Biological/Physical Models of Marine Systems
2009-01-01
cluster analysis (e.g., Clark and Corley, 2006) and shown that the dimensions of the problem can be reduced and multivariate and univariate goodness... information; a follow-up analysis (Arhonditsis et al., 2006) reported no relationship between the level of skill assessment presented or the accuracy of the... uncertainty analysis (Beck, 1987), model selection (Kass and Raftery, 1995), model averaging (Hoeting et al., 1999), and scores for probabilistic
Foulger, Gillian R.; Panza, Giuliano F.; Artemieva, Irina M.; Bastow, Ian D.; Cammarano, Fabio; Evans, John R.; Hamilton, Warren B.; Julian, Bruce R.; Lustrino, Michele; Thybo, Hans; ,
2013-01-01
Geological and geodynamic models of the mantle often rely on joint interpretations of published seismic tomography images and petrological/geochemical data. This approach tends to neglect the fundamental limitations of, and uncertainties in, seismic tomography results. These limitations and uncertainties involve theory, correcting for the crust, the lack of rays throughout much of the mantle, the difficulty in obtaining the true strength of anomalies, choice of what background model to subtract to reveal anomalies, and what cross-sections to select for publication. The aim of this review is to provide a relatively non-technical summary of the most important of these problems, collected together in a single paper, and presented in a form accessible to non-seismologists. Appreciation of these issues is essential if final geodynamic models are to be robust, and required by the scientific observations.
Bayesian GGE biplot models applied to maize multi-environments trials.
de Oliveira, L A; da Silva, C P; Nuvunga, J J; da Silva, A Q; Balestre, M
2016-06-17
The additive main effects and multiplicative interaction (AMMI) and the genotype main effects and genotype x environment interaction (GGE) models stand out among the linear-bilinear models used in genotype x environment interaction studies. Despite the advantages of their use to describe genotype x environment (AMMI) or genotype and genotype x environment (GGE) interactions, these methods have known limitations that are inherent to fixed effects models, including difficulty in treating variance heterogeneity and missing data. Traditional biplots include no measure of uncertainty regarding the principal components. The present study aimed to apply the Bayesian approach to GGE biplot models and assess the implications for selecting stable and adapted genotypes. Our results demonstrated that the Bayesian approach applied to GGE models with non-informative priors was consistent with the traditional GGE biplot analysis, although the credible region incorporated into the biplot enabled distinguishing, based on probability, the performance of genotypes, and their relationships with the environments in the biplot. Those regions also enabled the identification of groups of genotypes and environments with similar effects in terms of adaptability and stability. The relative position of genotypes and environments in biplots is highly affected by the experimental accuracy. Thus, incorporation of uncertainty in biplots is a key tool for breeders to make decisions regarding stability selection and adaptability and the definition of mega-environments.
Inflated Uncertainty in Multimodel-Based Regional Climate Projections.
Madsen, Marianne Sloth; Langen, Peter L; Boberg, Fredrik; Christensen, Jens Hesselbjerg
2017-11-28
Multimodel ensembles are widely analyzed to estimate the range of future regional climate change projections. For an ensemble of climate models, the result is often portrayed by showing maps of the geographical distribution of the multimodel mean results and associated uncertainties represented by model spread at the grid point scale. Here we use a set of CMIP5 models to show that presenting statistics this way results in an overestimation of the projected range leading to physically implausible patterns of change on global but also on regional scales. We point out that similar inconsistencies occur in impact analyses relying on multimodel information extracted using statistics at the regional scale, for example, when a subset of CMIP models is selected to represent regional model spread. Consequently, the risk of unwanted impacts may be overestimated at larger scales as climate change impacts will never be realized as the worst (or best) case everywhere.
Pharmacological Fingerprints of Contextual Uncertainty
Ruge, Diane; Stephan, Klaas E.
2016-01-01
Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses. PMID:27846219
NASA Astrophysics Data System (ADS)
Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.
2012-04-01
Data assimilation methods have received increased attention to accomplish uncertainty assessment and enhancement of forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented design to flexibly represent hydrological processes without any change of the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters are a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of high-performance computing (HPC) systems. We applied this software framework to short-term streamflow forecasting of several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data such as X-band or C-band radar is estimated and mitigated in the sequential data assimilation.
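The bootstrap (resampling) particle filter at the core of such a framework fits in a short sketch: particles are propagated through a stochastic state model, weighted by the likelihood of each new observation, and resampled, with no Gaussian assumption about the posterior. The one-dimensional state model, noise levels, and particle count below are illustrative placeholders, not the MPI-OHyMoS implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# toy 1-D "storage" model: hidden state x_t, noisy observation y_t
T, N = 50, 500                       # time steps, particles
x_true = np.zeros(T)
y_obs = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 1.0)    # process noise
    y_obs[t] = x_true[t] + rng.normal(0, 0.5)               # observation noise

particles = rng.normal(0, 1.0, N)
estimates = np.zeros(T)
for t in range(1, T):
    # propagate each particle through the (stochastic) model
    particles = 0.9 * particles + rng.normal(0, 1.0, N)
    # weight by the likelihood of the new observation
    w = np.exp(-0.5 * ((y_obs[t] - particles) / 0.5) ** 2)
    w /= w.sum()
    # resample (bootstrap filter): no assumption about the posterior shape
    particles = rng.choice(particles, size=N, replace=True, p=w)
    estimates[t] = particles.mean()

print("RMSE of filtered state:", np.sqrt(np.mean((estimates - x_true) ** 2)))
```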
NASA Technical Reports Server (NTRS)
Ruane, Alex C.; Mcdermid, Sonali P.
2017-01-01
We present the Representative Temperature and Precipitation (T&P) GCM Subsetting Approach developed within the Agricultural Model Intercomparison and Improvement Project (AgMIP) to select a practical subset of global climate models (GCMs) for regional integrated assessment of climate impacts when resource limitations do not permit the full ensemble of GCMs to be evaluated given the need to also focus on impacts sector and economics models. Subsetting inherently leads to a loss of information but can free up resources to explore important uncertainties in the integrated assessment that would otherwise be prohibitive. The Representative T&P GCM Subsetting Approach identifies five individual GCMs that capture a profile of the full ensemble of temperature and precipitation change within the growing season while maintaining information about the probability that basic classes of climate changes (relatively cool/wet, cool/dry, middle, hot/wet, and hot/dry) are projected in the full GCM ensemble. We demonstrate the selection methodology for maize impacts in Ames, Iowa, and discuss limitations and situations when additional information may be required to select representative GCMs. We then classify 29 GCMs over all land areas to identify regions and seasons with characteristic diagonal skewness related to surface moisture as well as extreme skewness connected to snow-albedo feedbacks and GCM uncertainty. Finally, we employ this basic approach to recognize that GCM projections demonstrate coherence across space, time, and greenhouse gas concentration pathway. The Representative T&P GCM Subsetting Approach provides a quantitative basis for the determination of useful GCM subsets, provides a practical and coherent approach where previous assessments selected GCMs based solely on the availability of scenarios, and may be extended for application to a range of scales and sectoral impacts.
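A minimal sketch of the subsetting idea: given growing-season temperature and precipitation changes for an ensemble of GCMs, split the ensemble at its medians into cool/hot and dry/wet halves and pick one representative per class plus a middle model. The data are random placeholders, and the class thresholds and representative-selection rule are simplified stand-ins, not the exact AgMIP criteria.

```python
import numpy as np

rng = np.random.default_rng(5)

# illustrative ensemble: growing-season changes for 29 GCMs (made-up values)
names = np.array([f"GCM{i:02d}" for i in range(29)])
dT = rng.normal(2.0, 0.6, 29)          # temperature change, K
dP = rng.normal(0.0, 8.0, 29)          # precipitation change, %

hot = dT > np.median(dT)
wet = dP > np.median(dP)
classes = {
    "cool/wet": ~hot & wet, "cool/dry": ~hot & ~wet,
    "hot/wet":  hot & wet,  "hot/dry":  hot & ~wet,
}

# standardise so temperature and precipitation contribute comparably
z = np.column_stack([(dT - dT.mean()) / dT.std(), (dP - dP.mean()) / dP.std()])

subset = {"middle": names[np.argmin(np.hypot(z[:, 0], z[:, 1]))]}
for label, mask in classes.items():
    idx = np.flatnonzero(mask)
    centroid = z[idx].mean(axis=0)
    # representative = class member closest to its class centroid
    subset[label] = names[idx[np.argmin(np.linalg.norm(z[idx] - centroid, axis=1))]]

print(subset)
```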
Implications of asymptomatic carriers for infectious disease transmission and control.
Chisholm, Rebecca H; Campbell, Patricia T; Wu, Yue; Tong, Steven Y C; McVernon, Jodie; Geard, Nicholas
2018-02-01
For infectious pathogens such as Staphylococcus aureus and Streptococcus pneumoniae, some hosts may carry the pathogen and transmit it to others, yet display no symptoms themselves. These asymptomatic carriers contribute to the spread of disease but go largely undetected and can therefore undermine efforts to control transmission. Understanding the natural history of carriage and its relationship to disease is important for the design of effective interventions to control transmission. Mathematical models of infectious diseases are frequently used to inform decisions about control and should therefore accurately capture the role played by asymptomatic carriers. In practice, incorporating asymptomatic carriers into models is challenging due to the sparsity of direct evidence. This absence of data leads to uncertainty in estimates of model parameters and, more fundamentally, in the selection of an appropriate model structure. To assess the implications of this uncertainty, we systematically reviewed published models of carriage and propose a new model of disease transmission with asymptomatic carriage. Analysis of our model shows how different assumptions about the role of asymptomatic carriers can lead to different conclusions about the transmission and control of disease. Critically, selecting an inappropriate model structure, even when parameters are correctly estimated, may lead to over- or under-estimates of intervention effectiveness. Our results provide a more complete understanding of the role of asymptomatic carriers in transmission and highlight the importance of accurately incorporating carriers into models used to make decisions about disease control.
Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Huh, Lynn
The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and well used method. However, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum variance estimation and other approaches have been used. However, these methods generally depend on assumed physical models, assumed statistical distributions (usually Gaussian) or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and does not have any convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well known gyro and accelerometer specifications and could be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for the additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). The criteria derived from the blend of the GPS and INS accuracy were used to select valid trajectories for statistical analysis. The aerodynamic coefficients were derived from the selected trajectories either by a direct extraction method based on the equations of dynamics, or by inquiry of the pre-flight aerodynamic database. After the application of the proposed method to the case of the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences in the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis, 2) the pre-flight estimation of the pitching moment coefficients was significantly different from the post-flight analysis, 3) the type of distribution of the states from the Monte Carlo simulation was affected by that of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and post-flight analysis are in closest agreement is between Mach *.* and *.* and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results from the proposed method applied to the two different flights with identical geometry and similar flight profile were consistent.
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world's oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (< 20 cm from species of maximum mass < 1 kg) are targeted in all oceans, but the predicted yields would rarely be accessible in practice and this fishing strategy leads to the collapse of larger species if fishing mortality rates on different size classes cannot be decoupled. Our analyses show that models with minimal parameter demands that are based on a few established ecological principles can support equitable analysis and comparison of diverse ecosystems. The analyses provide insights into the effects of parameter uncertainty on global biomass and production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented. Consequently, our simple models become increasingly less useful than more complex alternatives when addressing questions about food web structure and function, biodiversity, resilience and human impacts at smaller scales and for areas closer to coasts.
Uncertainty Assessment of Space-Borne Passive Soil Moisture Retrievals
NASA Technical Reports Server (NTRS)
Quets, Jan; De Lannoy, Gabrielle; Reichle, Rolf; Cosh, Michael; van der Schalie, Robin; Wigneron, Jean-Pierre
2017-01-01
The uncertainty associated with passive soil moisture retrieval is hard to quantify, and known to be underlain by various, diverse, and complex causes. Factors affecting space-borne retrieved soil moisture estimation include: (i) the optimization or inversion method applied to the radiative transfer model (RTM), such as e.g. the Single Channel Algorithm (SCA), or the Land Parameter Retrieval Model (LPRM), (ii) the selection of the observed brightness temperatures (Tbs), e.g. polarization and incidence angle, (iii) the definition of the cost function and the impact of prior information in it, and (iv) the RTM parameterization (e.g. parameterizations officially used by the SMOS L2 and SMAP L2 retrieval products, ECMWF-based SMOS assimilation product, SMAP L4 assimilation product, and perturbations from those configurations). This study aims at disentangling the relative importance of the above-mentioned sources of uncertainty, by carrying out soil moisture retrieval experiments, using SMOS Tb observations in different settings, of which some are mentioned above. The ensemble uncertainties are evaluated at 11 reference CalVal sites, over a time period of more than 5 years. These experimental retrievals were inter-compared, and further confronted with in situ soil moisture measurements and operational SMOS L2 retrievals, using commonly used skill metrics to quantify the temporal uncertainty in the retrievals.
Uncertainties and Systematic Effects on the estimate of stellar masses in high z galaxies
NASA Astrophysics Data System (ADS)
Salimbeni, S.; Fontana, A.; Giallongo, E.; Grazian, A.; Menci, N.; Pentericci, L.; Santini, P.
2009-05-01
We discuss the uncertainties and the systematic effects that exist in the estimates of the stellar masses of high redshift galaxies, using broad band photometry, and how they affect the deduced galaxy stellar mass function. We use for this purpose the latest version of the GOODS-MUSIC catalog. In particular, we discuss the impact of different synthetic models, of the assumed initial mass function and of the selection band. Using Charlot & Bruzual 2007 and Maraston 2005 models we find masses lower than those obtained from Bruzual & Charlot 2003 models. In addition, we find a slight trend as a function of the mass itself when comparing these two mass determinations with that from Bruzual & Charlot 2003 models. As a consequence, the derived galaxy stellar mass functions show diverse shapes, and their slope depends on the assumed models. Despite these differences, the same overall results and scenario are observed in all these cases. The masses obtained with the assumption of the Chabrier initial mass function are on average 0.24 dex lower than those from the Salpeter assumption, at all redshifts, causing a shift of the galaxy stellar mass function of the same amount. Finally, using a 4.5 μm-selected sample instead of a Ks-selected one, we add a new population of highly absorbed, dusty galaxies at z~=2-3 of relatively low masses, yielding stronger constraints on the slope of the galaxy stellar mass function at lower masses.
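The quoted 0.24 dex offset between Chabrier- and Salpeter-based masses corresponds to a constant multiplicative factor in mass, as the short check below shows:

```python
# a 0.24 dex offset in log-mass is a constant multiplicative factor in mass
factor = 10 ** 0.24
print(f"M(Salpeter) / M(Chabrier) = 10^0.24 = {factor:.2f}")   # roughly 1.7
```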
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea), considered homogeneous from the point of view of hydrological properties, and forcing it with changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressure. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The information on model result variability is intended to convey, efficiently and comprehensibly, the uncertainties and reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can contribute effectively to EBM.
NASA Astrophysics Data System (ADS)
Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia
2018-06-01
Hydrological models are important and effective tools for simulating complex hydrological processes. Different models have different strengths in capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected for a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the BP neural network method was employed to combine the results from the three models. The results show that the accuracy of the BP ensemble simulation is better than that of the single models.
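A minimal sketch of such a back-propagation (BP) ensemble, assuming three single-model discharge series and an observed series are available. The series here are synthetic stand-ins, and scikit-learn's MLPRegressor is used as a convenient small BP network rather than the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# synthetic stand-ins for daily flows: observations and three model simulations
n_days = 400
q_obs = 50 + 30 * np.sin(np.linspace(0, 12, n_days)) + rng.normal(0, 5, n_days)
q_swat = q_obs * 0.9 + rng.normal(0, 8, n_days)      # each "model" has its own
q_vic = q_obs * 1.1 + rng.normal(0, 6, n_days)       # bias and error structure
q_btop = q_obs + rng.normal(0, 10, n_days)

X = np.column_stack([q_swat, q_vic, q_btop])
train = slice(0, 300)                                 # calibrate, then verify
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[train], q_obs[train])

q_ens = net.predict(X[300:])
def rmse(sim):
    return np.sqrt(np.mean((sim - q_obs[300:]) ** 2))
print("single-model RMSEs:", [round(rmse(m[300:]), 1) for m in (q_swat, q_vic, q_btop)])
print("BP-ensemble RMSE  :", round(rmse(q_ens), 1))
```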
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genser, Krzysztof; Hatcher, Robert; Perdue, Gabriel
2016-11-10
The Geant4 toolkit is used to model interactions between particles and matter. Geant4 employs a set of validated physics models that span a wide range of interaction energies. These models are tuned to cover a large variety of possible applications. This raises the critical question of what uncertainties are associated with the Geant4 physics model, or group of models, involved in a simulation project. To address the challenge, we have designed and implemented a comprehensive, modular, user-friendly software toolkit that allows the variation of one or more parameters of one or more Geant4 physics models involved in simulation studies. It also enables analysis of multiple variants of the resulting physics observables of interest in order to estimate the uncertainties associated with the simulation model choices. Key functionalities of the toolkit are presented in this paper and are illustrated with selected results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Y.; Tong, C.; Trainor-Guitten, W. J.
The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
Cierkens, Katrijn; Plano, Salvatore; Benedetti, Lorenzo; Weijers, Stefan; de Jonge, Jarno; Nopens, Ingmar
2012-01-01
Application of activated sludge models (ASMs) to full-scale wastewater treatment plants (WWTPs) is still hampered by the problem of model calibration of these over-parameterised models. This either requires expert knowledge or global methods that explore a large parameter space. However, a better balance in structure between the submodels (ASM, hydraulic, aeration, etc.) and improved quality of influent data result in much smaller calibration efforts. In this contribution, a methodology is proposed that links data frequency and model structure to calibration quality and output uncertainty. It is composed of defining the model structure, the input data, an automated calibration, confidence interval computation and uncertainty propagation to the model output. Apart from the last step, the methodology is applied to an existing WWTP using three models differing only in the aeration submodel. A sensitivity analysis was performed on all models, allowing the ranking of the most important parameters to select in the subsequent calibration step. The aeration submodel proved very important to get good NH(4) predictions. Finally, the impact of data frequency was explored. Lowering the frequency resulted in larger deviations of parameter estimates from their default values and larger confidence intervals. Autocorrelation due to high frequency calibration data has an opposite effect on the confidence intervals. The proposed methodology opens doors to facilitate and improve calibration efforts and to design measurement campaigns.
NASA Astrophysics Data System (ADS)
Velázquez, J. A.; Schmid, J.; Ricard, S.; Muerth, M. J.; Gauvin St-Denis, B.; Minville, M.; Chaumont, D.; Caya, D.; Ludwig, R.; Turcotte, R.
2012-06-01
In recent years, several research efforts have investigated the impact of climate change on water resources for different regions of the world. The projection of future river flows is affected by different sources of uncertainty in the hydro-climatic modelling chain. One of the aims of the QBic3 project (Québec-Bavarian International Collaboration on Climate Change) is to assess the contribution of hydrological models to this uncertainty by using an ensemble of hydrological models presenting a diversity of structural complexity (i.e. lumped, semi-distributed and distributed models). The study investigates two humid, mid-latitude catchments with natural flow conditions; one located in Southern Québec (Canada) and one in Southern Bavaria (Germany). Daily flow is simulated with four different hydrological models, forced by outputs from regional climate models driven by a given number of GCM members over a reference (1971-2000) and a future (2041-2070) period. The results show that the choice of the hydrological model strongly affects the climate change response of selected hydrological indicators, especially those related to low flows. Indicators related to high flows seem less sensitive to the choice of the hydrological model. Therefore, the computationally less demanding models (usually simple, lumped and conceptual) give a significant level of trust for high and overall mean flows.
NASA Astrophysics Data System (ADS)
Tompkins, Lauren Alexandra
The first measurement of the inelastic cross-section for proton-proton collisions at a center of mass energy of 7 TeV using the ATLAS detector at the Large Hadron Collider is presented. From a dataset corresponding to an integrated luminosity of 20 inverse microbarns, events are selected by requiring activity in scintillation counters mounted in the forward region of the ATLAS detector. An inelastic cross-section of 60.1 +/- 2.1 millibarns is measured for the subset of events visible to the scintillation counters. The uncertainty includes the statistical and systematic uncertainty on the measurement. The visible events satisfy ξ > 5 × 10^-6, where ξ = M_X^2/s is calculated from the invariant mass, M_X, of hadrons selected using the largest rapidity gap in the event. For diffractive events this corresponds to requiring at least one of the dissociation masses to be larger than 15.7 GeV. Using an extrapolation dependent on the model for the differential diffractive mass distribution, an inelastic cross-section of 69.1 +/- 2.4 (exp) +/- 6.9 (extr) millibarns is determined, where (exp) indicates the experimental uncertainties and (extr) indicates the uncertainty due to the extrapolation from the limited ξ range to the full inelastic cross-section.
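The 15.7 GeV threshold follows directly from the ξ cut; a quick numerical check using only the values quoted in the abstract:

```python
import math

s = 7000.0 ** 2            # GeV^2, centre-of-mass energy squared at sqrt(s) = 7 TeV
xi_min = 5e-6
M_X_min = math.sqrt(xi_min * s)
print(f"M_X > {M_X_min:.1f} GeV")   # ~15.7 GeV, matching the quoted threshold
```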
NASA Astrophysics Data System (ADS)
McCabe, T. D.; Flory, S. L.; Wiesner, S.; Dietze, M.
2017-12-01
Forested ecosystems are currently being disrupted by invasive species. One example is the invasive grass Imperata cylindrica (cogongrass), which is widespread in southeastern US pine forests. Pine forests dominate the forest cover of the southeast and contribute to making the Southeast the United States' largest carbon sink. Cogongrass decreases the colonization of loblolly pine fine roots. If cogongrass continues to invade, this sink could be jeopardized. However, the effects of cogongrass invasion on carbon sequestration are largely unknown. We have projected the effects of elevated CO2 and changing climate on future cogongrass invasion. To test how pine stands are affected by cogongrass, cogongrass invasions were modeled using the Ecosystem Demography 2 (ED2) model and parameterized using the Predictive Ecosystem Analyzer (PEcAn). ED2 takes into account local meteorological data, stand populations and succession, disturbance, and geochemical pools. PEcAn is a workflow that uses Bayesian sensitivity analyses and variance decomposition to quantify the uncertainty that each parameter contributes to overall model uncertainty. ED2 was run for four NEON and Ameriflux sites in the Southeast from the earliest available census of the site into 2010. These model results were compared to site measurements to test for model accuracy and bias. To project the effect of elevated CO2 on cogongrass invasions, ED2 was run from 2006-2100 at four sites under four separate scenarios: 1) RCP4.5 CO2 and climate, 2) RCP4.5 climate only, with constant CO2 concentrations, 3) RCP4.5 elevated CO2 only, with climate randomly selected from 2006-2026, 4) present day, made from randomly selected measures of CO2 and radiation from 2006-2026. Each scenario was run three times: once with cogongrass absent, once with a low cogongrass abundance, and once with a high cogongrass abundance. Model results suggest that many relevant parameters have high uncertainty due to lack of measurement. Further field work quantifying the carbon cycle, particularly belowground processes and respiration, could help constrain parameter uncertainty.
Regional Seismic Travel-Time Prediction, Uncertainty, and Location Improvement in Western Eurasia
NASA Astrophysics Data System (ADS)
Flanagan, M. P.; Myers, S. C.
2004-12-01
We investigate our ability to improve regional travel-time prediction and seismic event location using an a priori, three-dimensional velocity model of Western Eurasia and North Africa: WENA1.0 [Pasyanos et al., 2004]. Our objective is to improve the accuracy of seismic location estimates and calculate representative location uncertainty estimates. As we focus on the geographic region of Western Eurasia, the Middle East, and North Africa, we develop, test, and validate 3D model-based travel-time prediction models for 30 stations in the study region. Three principal results are presented. First, the 3D WENA1.0 velocity model improves travel-time prediction over the iasp91 model, as measured by variance reduction, for regional Pg, Pn, and P phases recorded at the 30 stations. Second, a distance-dependent uncertainty model is developed and tested for the WENA1.0 model. Third, an end-to-end validation test based on 500 event relocations demonstrates improved location performance over the 1-dimensional iasp91 model. Validation of the 3D model is based on a comparison of approximately 11,000 Pg, Pn, and P travel-time predictions and empirical observations from ground truth (GT) events. Ray coverage for the validation dataset is chosen to provide representative, regional-distance sampling across Eurasia and North Africa. The WENA1.0 model markedly improves travel-time predictions for most stations with an average variance reduction of 25% for all ray paths. We find that improvement is station dependent, with some stations benefiting greatly from WENA1.0 predictions (52% at APA, 33% at BKR, and 32% at NIL), some stations showing moderate improvement (12% at KEV, 14% at BOM, and 12% at TAM), some benefiting only slightly (6% at MOX, and 4% at SVE), and some being degraded (-6% at MLR and -18% at QUE). We further test WENA1.0 by comparing location accuracy with results obtained using the iasp91 model. Again, relocation of these events is dependent on ray paths that evenly sample WENA1.0 and therefore provide an unbiased assessment of location performance. A statistically significant sample is achieved by generating 500 location realizations based on 5 events with location accuracy between 1 km and 5 km. Each realization is a randomly selected event with location determined by randomly selecting 5 stations from the available network. In 340 cases (68% of the instances), locations are improved, and average mislocation is reduced from 31 km to 26 km. Preliminary tests of uncertainty estimates suggest that our uncertainty model produces location uncertainty ellipses that are representative of location accuracy. These results highlight the importance of accurate GT datasets in assessing regional travel-time models and demonstrate that an a priori 3D model can markedly improve our ability to locate small magnitude events in a regional monitoring context. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-CONF-206386.
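Variance reduction, used above to compare the 3D and 1D travel-time predictions, can be computed as follows; the residuals here are random placeholders, and the exact definition used in the study may differ:

```python
import numpy as np

def variance_reduction(residuals_1d, residuals_3d):
    """Percent variance reduction of 3-D model travel-time residuals
    relative to a 1-D baseline (e.g. iasp91)."""
    return 100.0 * (1.0 - np.var(residuals_3d) / np.var(residuals_1d))

rng = np.random.default_rng(7)
res_1d = rng.normal(0.0, 2.0, 1000)       # illustrative residuals, seconds
res_3d = rng.normal(0.0, 1.7, 1000)
print(f"{variance_reduction(res_1d, res_3d):.0f}% variance reduction")
```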
Bouska, Kristen; Whitledge, Gregory W.; Lant, Christopher; Schoof, Justin
2018-01-01
Land cover is an important determinant of aquatic habitat and is projected to shift with climate changes, yet climate-driven land cover changes are rarely factored into climate assessments. To quantify impacts and uncertainty of coupled climate and land cover change on warm-water fish species’ distributions, we used an ensemble model approach to project distributions of 14 species. For each species, current range projections were compared to 27 scenario-based projections and aggregated to visualize uncertainty. Multiple regression and model selection techniques were used to identify drivers of range change. Novel, or no-analogue, climates were assessed to evaluate transferability of models. Changes in total probability of occurrence ranged widely across species, from a 63% increase to a 65% decrease. Distributional gains and losses were largely driven by temperature and flow variables and underscore the importance of habitat heterogeneity and connectivity to facilitate adaptation to changing conditions. Finally, novel climate conditions were driven by mean annual maximum temperature, which stresses the importance of understanding the role of temperature on fish physiology and the role of temperature-mitigating management practices.
NASA Astrophysics Data System (ADS)
Fernández, J.; Frías, M. D.; Cabos, W. D.; Cofiño, A. S.; Domínguez, M.; Fita, L.; Gaertner, M. A.; García-Díez, M.; Gutiérrez, J. M.; Jiménez-Guerrero, P.; Liguori, G.; Montávez, J. P.; Romera, R.; Sánchez, E.
2018-03-01
We present an unprecedented ensemble of 196 future climate projections arising from different global and regional model intercomparison projects (MIPs): CMIP3, CMIP5, ENSEMBLES, ESCENA, EURO- and Med-CORDEX. This multi-MIP ensemble includes all regional climate model (RCM) projections publicly available to date, along with their driving global climate models (GCMs). We illustrate consistent and conflicting messages using continental Spain and the Balearic Islands as the target region. The study considers near-future (2021-2050) changes and their dependence on several uncertainty sources sampled in the multi-MIP ensemble: GCM, future scenario, internal variability, RCM, and spatial resolution. This initial work focuses on mean seasonal precipitation and temperature changes. The results show that the potential GCM-RCM combinations have been explored very unevenly, with favoured GCMs and large ensembles of a few RCMs that do not respond to any ensemble design. Therefore, the grand ensemble is weighted towards a few models. The selection of a balanced, credible sub-ensemble is shown to be challenging, as illustrated by several conflicting responses between RCMs and their driving GCMs and among different RCMs. Sub-ensembles from different initiatives are dominated by different uncertainty sources, with the driving GCM being the main contributor to uncertainty in the grand ensemble. For this analysis of near-future changes, the emission scenario does not contribute strongly to the uncertainty. Despite the extra computational effort, the increase in resolution does not lead to important changes in the mean seasonal change signal.
NASA Astrophysics Data System (ADS)
Grunthal, Gottfried; Stromeyer, Dietrich; Bosse, Christian; Cotton, Fabrice; Bindi, Dino
2017-04-01
The seismic load parameters for the upcoming National Annex to Eurocode 8 result from the reassessment of the seismic hazard supported by the German Institution for Civil Engineering. This 2016 version of the hazard assessment for Germany as the target area was based on a comprehensive incorporation of all accessible model and parameter uncertainties into the approach and on a rational framework for treating these uncertainties in a transparent way. The developed seismic hazard model represents significant improvements; i.e., it is based on updated and extended databases, comprehensive ranges of models, robust methods and a selection of a set of ground motion prediction equations of their latest generation. The output specifications were designed according to user-oriented needs as suggested by two review teams supervising the entire project. In particular, seismic load parameters were calculated for rock conditions with a vS30 of 800 m s-1 for three hazard levels (10%, 5% and 2% probability of occurrence or exceedance within 50 years) in the form of, e.g., uniform hazard spectra (UHS) based on 19 spectral periods in the range 0.01-3 s, and seismic hazard maps for spectral response accelerations at different spectral periods or for macroseismic intensities. The developed hazard model consists of a logic tree with 4040 end branches and essential innovations employed to capture epistemic uncertainties and aleatory variabilities. The computation scheme enables the sound calculation of the mean and any quantile of the required seismic load parameters. Mean, median and 84th-percentile load parameters were provided together with the full calculation model to clearly illustrate the uncertainties of such a probabilistic assessment for a region of low-to-moderate seismicity. The regional variations of these uncertainties (e.g. ratios between the mean and median hazard estimations) were analyzed and discussed.
Monte-Carlo modelling to determine optimum filter choices for sub-microsecond optical pyrometry.
Ota, Thomas A; Chapman, David J; Eakins, Daniel E
2017-04-01
When designing a spectral-band pyrometer for use at high time resolutions (sub-μs), there is ambiguity regarding the optimum characteristics of the spectral filter(s). In particular, while prior work has discussed uncertainties in spectral-band pyrometry, there has been little discussion of the effects of noise, which is an important consideration in time-resolved, high-speed experiments. Using a Monte-Carlo process to simulate the effects of noise, a model of collection from a black body has been developed to give insights into the optimum choices for centre wavelength and passband width. The model was validated and then used to explore the effects of centre wavelength and passband width on measurement uncertainty. This reveals a transition centre wavelength below which uncertainties in calculated temperature are high. To further investigate system performance, simultaneous variation of the centre wavelength and passband width of a filter is examined. Using data reduction, the effects of temperature and noise levels are illustrated and an empirical approximation is determined. The results presented show that filter choice can significantly affect instrument performance and, while best practice requires detailed modelling to achieve optimal performance, the expression presented can be used to aid filter selection.
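The core of such a Monte-Carlo assessment can be illustrated with a short, hedged sketch (not the authors' code; the source temperature, passband, detector noise level and wavelength grid below are invented for illustration): a black-body signal is integrated over an ideal top-hat passband, additive Gaussian noise is applied, and the noisy signal is numerically inverted back to temperature many times to estimate the temperature spread for a given filter centre wavelength.

```python
# Hypothetical single-band pyrometer noise study via Monte Carlo (illustrative only).
import numpy as np
from scipy.optimize import brentq

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, speed of light, Boltzmann (SI)

def planck(lam, T):
    """Black-body spectral radiance at wavelength lam (m) and temperature T (K)."""
    return 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def band_signal(center, width, T, n=200):
    """Radiance collected through an ideal top-hat passband."""
    lam = np.linspace(center - width / 2.0, center + width / 2.0, n)
    return np.mean(planck(lam, T)) * width

def temperature_spread(center, width, T_true, noise_std, trials=2000, seed=0):
    """Monte-Carlo standard deviation of the retrieved temperature."""
    rng = np.random.default_rng(seed)
    s_true = band_signal(center, width, T_true)
    estimates = []
    for _ in range(trials):
        s_meas = s_true + noise_std * rng.standard_normal()
        if s_meas <= 0.0:
            continue                      # unusable sample at very low signal levels
        T_est = brentq(lambda T: band_signal(center, width, T) - s_meas, 300.0, 20000.0)
        estimates.append(T_est)
    return np.std(estimates)

if __name__ == "__main__":
    width, T_true = 50e-9, 2500.0
    noise_std = 0.01 * band_signal(2.0e-6, width, T_true)   # fixed detector noise floor
    for center in (0.5e-6, 1.0e-6, 1.5e-6, 2.0e-6):
        sd = temperature_spread(center, width, T_true, noise_std)
        print(f"centre {center * 1e9:6.0f} nm -> temperature std ~ {sd:5.1f} K")
```

Sweeping the centre wavelength in the main block gives a simple, filter-by-filter comparison of retrieval uncertainty under a fixed noise floor, which is the kind of trade study the paper automates and condenses into an empirical expression.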
NASA Astrophysics Data System (ADS)
Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina
2015-11-01
A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variability in these steps, the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6%, and slightly higher if the root-mean-square error is considered. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
Dynamic wake prediction and visualization with uncertainty analysis
NASA Technical Reports Server (NTRS)
Holforty, Wendy L. (Inventor); Powell, J. David (Inventor)
2005-01-01
A dynamic wake avoidance system utilizes aircraft and atmospheric parameters readily available in flight to model and predict airborne wake vortices in real time. A novel combination of algorithms allows a relatively simple yet robust wake model to be constructed based on information extracted from a broadcast. The system predicts the location and movement of the wake based on the nominal wake model and correspondingly performs an uncertainty analysis on the wake model to determine a wake hazard zone (no-fly zone), which comprises a plurality of wake planes, each moving independently from another. The system selectively adjusts dimensions of each wake plane to minimize spatial and temporal uncertainty, thereby ensuring that the actual wake is within the wake hazard zone. The predicted wake hazard zone is communicated in real time directly to a user via a realistic visual representation. In an example, the wake hazard zone is visualized on a 3-D flight deck display to enable a pilot to see a neighboring aircraft as well as its wake. The system substantially enhances the pilot's situational awareness and allows for a further safe decrease in spacing, which could alleviate airport and airspace congestion.
Reflections on Heckman and Pinto’s Causal Analysis After Haavelmo
2013-11-01
Judea Pearl, University of California, Los Angeles, Computer Science Department. [Only fragmented title-page and reference-list excerpts are available for this entry.]
Calibration of hydrological models using flow-duration curves
NASA Astrophysics Data System (ADS)
Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.
2010-12-01
The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable importance of the performance with flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
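A minimal sketch of the EP-selection idea follows (an assumed implementation with synthetic discharge, not the authors' code): the flow-duration curve is built from the discharge record, and evaluation points are picked either at equal intervals of discharge or at equal intervals of cumulative water volume.

```python
# Illustrative selection of flow-duration-curve evaluation points (assumed workflow).
import numpy as np

def flow_duration_curve(q):
    """Return (exceedance probability, discharge sorted high to low) for a series."""
    q_sorted = np.sort(q)[::-1]
    exceed = (np.arange(1, len(q_sorted) + 1) - 0.5) / len(q_sorted)
    return exceed, q_sorted

def evaluation_points(q, n_ep=10, method="volume"):
    exceed, q_sorted = flow_duration_curve(q)
    if method == "discharge":
        targets = np.linspace(q_sorted.min(), q_sorted.max(), n_ep)
        idx = [int(np.argmin(np.abs(q_sorted - t))) for t in targets]
    else:
        cum_vol = np.cumsum(q_sorted) / q_sorted.sum()   # equal increments of water volume
        targets = np.linspace(cum_vol[0], 1.0, n_ep)
        idx = [int(np.searchsorted(cum_vol, t)) for t in targets]
    idx = np.clip(np.unique(idx), 0, len(q_sorted) - 1)
    return exceed[idx], q_sorted[idx]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    q = np.exp(rng.normal(1.0, 1.0, 3650))        # synthetic daily discharge record
    for m in ("discharge", "volume"):
        ep_prob, ep_q = evaluation_points(q, n_ep=5, method=m)
        print(m, np.round(ep_prob, 3), np.round(ep_q, 2))
```

Running the sketch shows how the volume-based targets spread the EPs further down the curve than equal discharge intervals, which is consistent with the paper's finding that the volume method better constrains slow flows and recessions.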
Prey Selection by an Apex Predator: The Importance of Sampling Uncertainty
Davis, Miranda L.; Stephens, Philip A.; Willis, Stephen G.; Bassi, Elena; Marcon, Andrea; Donaggio, Emanuela; Capitani, Claudia; Apollonio, Marco
2012-01-01
The impact of predation on prey populations has long been a focus of ecologists, but a firm understanding of the factors influencing prey selection, a key predictor of that impact, remains elusive. High levels of variability observed in prey selection may reflect true differences in the ecology of different communities but might also reflect a failure to deal adequately with uncertainties in the underlying data. Indeed, our review showed that less than 10% of studies of European wolf predation accounted for sampling uncertainty. Here, we relate annual variability in wolf diet to prey availability and examine temporal patterns in prey selection; in particular, we identify how considering uncertainty alters conclusions regarding prey selection. Over nine years, we collected 1,974 wolf scats and conducted drive censuses of ungulates in Alpe di Catenaia, Italy. We bootstrapped scat and census data within years to construct confidence intervals around estimates of prey use, availability and selection. Wolf diet was dominated by boar (61.5±3.90 [SE] % of biomass eaten) and roe deer (33.7±3.61%). Temporal patterns of prey densities revealed that the proportion of roe deer in wolf diet peaked when boar densities were low, not when roe deer densities were highest. Considering only the two dominant prey types, Manly's standardized selection index using all data across years indicated selection for boar (mean = 0.73±0.023). However, sampling error resulted in wide confidence intervals around estimates of prey selection. Thus, despite considerable variation in yearly estimates, confidence intervals for all years overlapped. Failing to consider such uncertainty could lead erroneously to the assumption of differences in prey selection among years. This study highlights the importance of considering temporal variation in relative prey availability and accounting for sampling uncertainty when interpreting the results of dietary studies. PMID:23110122
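The bootstrap step can be illustrated with a short, hedged sketch (invented scat and census counts for two prey types, not the study's data): scat and census totals are resampled to put a confidence interval on Manly's standardised selection index.

```python
# Illustrative bootstrap of Manly's standardised selection index (invented counts).
import numpy as np

def manly_index(use_counts, avail_counts):
    """Manly's standardised selection ratio for each prey type."""
    u = use_counts / use_counts.sum()
    a = avail_counts / avail_counts.sum()
    w = u / a
    return w / w.sum()

def bootstrap_manly(scats, census, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    out = np.empty((n_boot, len(scats)))
    for b in range(n_boot):
        u = rng.multinomial(int(scats.sum()), scats / scats.sum())
        a = rng.multinomial(int(census.sum()), census / census.sum())
        out[b] = manly_index(u, np.maximum(a, 1))   # guard against empty census draws
    return out

if __name__ == "__main__":
    scats = np.array([120, 70])    # hypothetical scats containing boar, roe deer
    census = np.array([400, 900])  # hypothetical censused boar, roe deer
    boot = bootstrap_manly(scats, census)
    lo, hi = np.percentile(boot[:, 0], [2.5, 97.5])
    print(f"selection for boar: {manly_index(scats, census)[0]:.2f} "
          f"(95% CI {lo:.2f}-{hi:.2f})")
```

The width of the resulting interval, relative to year-to-year differences in the point estimate, is exactly the comparison the study uses to argue that apparent annual shifts in prey selection can be artefacts of sampling error.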
75 FR 54403 - U.S. National Climate Assessment Objectives, Proposed Topics, and Next Steps
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-07
..., methods and design, tools for assessing climate change and impacts, dealing with uncertainty, sources of..., coordination with other Federal climate-related programs, design of documents and tailored communications with... methodological perspectives related to selecting model and downscaling outputs and approaches for their use in...
An objective Bayesian analysis of a crossover design via model selection and model averaging.
Li, Dandan; Sivaganesan, Siva
2016-11-10
Inference about the treatment effect in a crossover design has received much attention over time owing to the uncertainty in the existence of the carryover effect and its impact on the estimation of the treatment effect. Adding to this uncertainty is that the existence of the carryover effect and its size may depend on the presence of the treatment effect and its size. We consider estimation and testing hypothesis about the treatment effect in a two-period crossover design, assuming normally distributed response variable, and use an objective Bayesian approach to test the hypothesis about the treatment effect and to estimate its size when it exists while accounting for the uncertainty about the presence of the carryover effect as well as the treatment and period effects. We evaluate and compare the performance of the proposed approach with a standard frequentist approach using simulated data, and real data. Copyright © 2016 John Wiley & Sons, Ltd.
The Impact of Model Uncertainty on Spatial Compensation in Active Structural Acoustic Control
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.; Gibbs, Gary P.; Sprofera, Joseph D.; Clark, Robert L.
2004-01-01
Turbulent boundary layer (TBL) noise is considered a primary factor in the interior noise experienced by passengers aboard commercial airliners. There have been numerous investigations of interior noise control devoted to aircraft panels; however, practical realization is a challenge since the physical boundary conditions are uncertain at best. In most prior studies, pinned or clamped boundary conditions have been assumed; however, realistic panels likely display a range of varying boundary conditions between these two limits. Uncertainty in boundary conditions is a challenge for control system designers, both in terms of the compensator implemented and the location of actuators and sensors required to achieve the desired control. The impact of model uncertainties, uncertain boundary conditions in particular, on the selection of actuator and sensor locations for structural acoustic control is considered herein. Results from this research effort indicate that it is possible to optimize the design of actuator and sensor location and aperture in a way that minimizes the impact of boundary conditions on the desired structural acoustic control.
NASA Astrophysics Data System (ADS)
DiGregorio, A.; Wilson, E. L.; Palmer, P. I.; Mao, J.; Feng, L.
2017-12-01
We present the simulated impact of a small (50-instrument) ground network of NASA Goddard Space Flight Center's miniaturized laser heterodyne radiometer (mini-LHR), a small, low-cost (~$50k), portable, and high-precision CH4 and CO2 measuring instrument. Partnered with AERONET as a non-intrusive accessory, the mini-LHR is able to leverage the 500+ instrument AERONET network for rapid network deployment and testing, and simultaneously retrieve co-located aerosol data, an important input for satellite measurements. This observing system simulation experiment (OSSE) uses the 3-D GEOS-Chem chemistry transport model and 50 strategically selected sites to model the flux-estimate uncertainty reduction achievable with both TCCON and mini-LHR instruments. We found that 50 mini-LHR sites are capable of reducing global uncertainty by up to 70%, with local improvements in the Southern Hemisphere reaching 90%. Our studies show that addition of the mini-LHR to current ground networks will play a major role in reducing global carbon flux uncertainty.
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
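A schematic sketch of the null-space projection at the heart of NSMC follows (a generic linear illustration with random numbers, not the PEST implementation used in the study): random parameter realisations are projected onto the approximate null space of the observation Jacobian, so that to first order they leave the calibrated fit unchanged.

```python
# Illustrative null-space Monte Carlo parameter generation (generic linear sketch).
import numpy as np

def nsmc_realisations(p_cal, jacobian, n_real=100, n_solution=None, seed=0):
    """p_cal: calibrated parameters (n_par,); jacobian: d(obs)/d(par), shape (n_obs, n_par)."""
    rng = np.random.default_rng(seed)
    _, s, vt = np.linalg.svd(jacobian, full_matrices=True)
    if n_solution is None:
        # dimensionality of the solution space from a relative singular-value cutoff
        n_solution = int(np.sum(s > 1e-6 * s[0]))
    v_null = vt[n_solution:].T               # basis of the (approximate) null space
    proj = v_null @ v_null.T                 # projector onto the null space
    realisations = []
    for _ in range(n_real):
        p_rand = rng.normal(p_cal, 1.0)      # stochastic parameter field (illustrative prior)
        realisations.append(p_cal + proj @ (p_rand - p_cal))
    return np.array(realisations)

if __name__ == "__main__":
    n_obs, n_par = 20, 50
    rng = np.random.default_rng(1)
    J = rng.normal(size=(n_obs, n_par))
    p_cal = rng.normal(size=n_par)
    fields = nsmc_realisations(p_cal, J, n_real=5)
    # null-space perturbations leave simulated observations (to first order) unchanged
    print(np.abs(J @ (fields - p_cal).T).max())
```

Choosing `n_solution` plays the role of the solution-space dimensionality discussed in the abstract: too small a value widens the null space and can bias the constrained ensemble, while too large a value removes the computational advantage.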
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, F.T.; Young, M.L.; Miller, L.A.
Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.
NASA Astrophysics Data System (ADS)
Dimassi, Bassem; Guenet, Bertrand; Mary, Bruno; Trochard, Robert; Bouthier, Alain; Duparque, Annie; Sagot, Stéphanie; Houot, Sabine; Morel, Christian; Martin, Manuel
2016-04-01
Land use, land-use change and forestry (LULUCF) activities and crop management (CM) in Europe could provide an important carbon sink through soil organic carbon (SOC) sequestration. Recently, EU Decision 529/2013 required European Union member states to assess modalities to include greenhouse gas (GHG) emissions and removals resulting from activities relating to LULUCF and CM into the Union's GHG emissions reduction commitment and their national inventory reports (NIRs). Tier 1, the commonly used method to estimate emissions for NIRs, provides a framework for measuring SOC stock changes. However, these estimates have high uncertainty, especially regarding the response to crop management in regional and specific national contexts. Understanding and quantifying this uncertainty with accurate confidence intervals is crucial for reliable reporting and for supporting decision-making and policies that aim to mitigate greenhouse gases through soil C storage. Here, we used the Tier 3 method, consisting of process-based modelling, to address the issue of uncertainty quantification at the national scale in France. Specifically, we used 20 long-term cropland experiments (LTEs) in France with more than 100 treatments covering different agricultural practices such as tillage, organic amendment, inorganic fertilization and cover crops. These LTEs were carefully selected because they are well characterized, with periodic SOC stock monitoring over time, and cover a wide range of pedo-climatic conditions. We applied a linear mixed-effects model to statistically model, as a function of soil, climate and cropping-system characteristics, the uncertainty resulting from applying this Tier 3 approach. The model was fitted to the dataset obtained by comparing the simulated (with the Century model v4.5) and observed SOC changes in these LTEs. This mixed-effects model will then be used to derive the uncertainty related to the simulation of SOC stock changes for the French Soil Monitoring Network (FSMN), where only one measurement is made on a regular 16 km grid. These simulations on the grid will in turn be used for the NIR. Preliminary results suggest that the model does not adequately simulate SOC stock levels but succeeds in capturing SOC changes due to management, despite the fact that it does not explicitly simulate some practices such as tillage. This is probably due to inappropriate model parametrization, especially for crops and thus C inputs, in the French context, and/or to model initialization.
NASA Astrophysics Data System (ADS)
Garrigues, S.; Olioso, A.; Calvet, J.-C.; Lafont, S.; Martin, E.; Chanzy, A.; Marloie, O.; Bertrand, N.; Desfonds, V.; Renard, D.
2012-04-01
Vegetation productivity and water balance of Mediterranean regions will be particularly affected by climate and land-use changes. In order to analyze and predict these changes through land surface models, a critical step is to quantify the uncertainties associated with these models (processes, parameters) and their implementation over a long period of time. Besides, uncertainties attached to the data used to force these models (atmospheric forcing, vegetation and soil characteristics, crop management practices...), which are generally available at coarse spatial resolution (>1-10 km) and for a limited number of plant functional types, need to be evaluated. This paper aims at assessing the uncertainties in water (evapotranspiration) and energy fluxes estimated from a Soil Vegetation Atmosphere Transfer (SVAT) model over a Mediterranean agricultural site. While similar past studies focused on particular crop types and limited periods of time, the originality of this paper consists in implementing the SVAT model and assessing its uncertainties over a long period of time (10 years), encompassing several cycles of distinct crops (wheat, sorghum, sunflower, peas). The impacts on the SVAT simulations of the following sources of uncertainty are characterized: - Uncertainties in atmospheric forcing are assessed by comparing simulations forced with local meteorological measurements and simulations forced with a re-analysis atmospheric dataset (SAFRAN database). - Uncertainties in key surface characteristics (soil, vegetation, crop management practices) are tested by comparing simulations fed with standard values from global databases (e.g. ECOCLIMAP) and simulations based on in situ or site-calibrated values. - Uncertainties due to the implementation of the SVAT model over a long period of time are analyzed with regard to crop rotation. The SVAT model analyzed in this paper is ISBA in its A-gs version, which simulates photosynthesis and its coupling with stomatal conductance, as well as the time course of the plant biomass and the Leaf Area Index (LAI). The experiment was conducted at the INRA-Avignon (France) crop site (ICOS associated site), for which 10 years of energy and water eddy fluxes, soil moisture profiles, vegetation measurements and agricultural practices are available for distinct crop types. The uncertainties in evapotranspiration and energy flux estimates are quantified from both a 10-year trend analysis and selected daily cycles spanning a range of atmospheric conditions and phenological stages. While the net radiation flux is correctly simulated, the cumulated latent heat flux is under-estimated. Daily plots indicate i) an overestimation of evapotranspiration over bare soil, probably due to an overestimation of the soil water reservoir available for evaporation, and ii) an under-estimation of transpiration for developed canopies. Uncertainties attached to the re-analysis atmospheric data show little influence on the cumulated values of evapotranspiration. Better performances are reached using in situ soil depths and site-calibrated photosynthesis parameters compared to the simulations based on the ECOCLIMAP standard values. Finally, this paper highlights the impact of the temporal succession of vegetation cover and bare soil on the simulation of soil moisture and evapotranspiration over a long period of time. Thus, solutions to account for crop rotation in the implementation of SVAT models are discussed.
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas farther inland because of site amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that the maps follow only the mean value at the measurement locations, with no regard to the data uncertainties, and are therefore not always reliable. Our research objective is to develop a methodology to incorporate data uncertainties in mapping and propose a reliable map. The methodology is based on hierarchical Bayesian modeling of normally distributed site responses in space, where the mean (μ), site-specific variance (σ2) and between-site variance (s2) parameters are treated as unknowns with prior distributions. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inference on the unknown parameters is performed by sampling the posterior distribution with Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to the data uncertainties. During initial trials, we observed that the tau (= 1/s2) parameter of the CAR prior controls the estimation of μ. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1). This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
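The reported behaviour can be illustrated with a toy normal-normal shrinkage calculation (an illustration only, not the authors' CAR model; all numbers are invented): a site estimate moves from the site-specific mean toward a regional model mean as the site-specific uncertainty grows.

```python
# Toy precision-weighted shrinkage of a site-response estimate (illustrative only).
import numpy as np

def shrunk_estimate(site_mean, site_var, regional_mean, regional_var):
    """Posterior mean of a normal site effect given a normal regional prior."""
    w = (1.0 / site_var) / (1.0 / site_var + 1.0 / regional_var)
    return w * site_mean + (1.0 - w) * regional_mean

if __name__ == "__main__":
    regional_mean, regional_var = 2.0, 0.5
    site_mean = 3.0
    for site_var in (0.01, 0.1, 1.0, 10.0):
        est = shrunk_estimate(site_mean, site_var, regional_mean, regional_var)
        print(f"site variance {site_var:5.2f} -> estimate {est:4.2f}")
```

As the site variance grows, the estimate slides from 3.0 toward the regional value of 2.0, mirroring the convergence of the maximum-likelihood map toward the model mean at highly uncertain locations.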
Larocque, Guy R.; Bhatti, Jagtar S.; Liu, Jinxun; Ascough, James C.; Gordon, Andrew M.
2008-01-01
Many process-based models of carbon (C) and nitrogen (N) cycles have been developed for terrestrial ecosystems, including forest ecosystems. They address many basic issues of ecosystem structure and functioning, such as the role of internal feedback in ecosystem dynamics. The critical factor in these phenomena is scale, as these processes operate at scales from the minute (e.g. particulate pollution impacts on trees and other organisms) to the global (e.g. climate change). Research efforts remain important to improve the capability of such models to better represent the dynamics of terrestrial ecosystems, including the C, nutrient (e.g. N) and water cycles. Existing models are sufficiently well advanced to help decision makers develop sustainable management policies and planning of terrestrial ecosystems, as they make realistic predictions when used appropriately. However, decision makers must be aware of their limitations by having the opportunity to evaluate the uncertainty associated with process-based models (Smith and Heath, 2001; Allen et al., 2004). The variation in scale of issues currently being addressed by modelling efforts makes the evaluation of uncertainty a daunting task.
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper introduces a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
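The asymmetric variability measures that motivate the uncertainty set can be sketched as downside and upside semideviations of a return series (a hedged, generic illustration with simulated returns, not the paper's interval random formulation):

```python
# Illustrative downside/upside semideviations of a return series (generic sketch).
import numpy as np

def semideviations(returns):
    """Root-mean-square deviation below and above the mean return."""
    mu = returns.mean()
    downside = np.sqrt(np.mean(np.minimum(returns - mu, 0.0) ** 2))
    upside = np.sqrt(np.mean(np.maximum(returns - mu, 0.0) ** 2))
    return downside, upside

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = rng.standard_t(df=4, size=1000) * 0.01 + 0.0005   # heavy-tailed simulated returns
    d, u = semideviations(r)
    print(f"downside {d:.4f}, upside {u:.4f}")
```

For skewed return data the two semideviations differ, and treating them separately rather than through a single symmetric variance is the basic idea the interval random uncertainty set formalises.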
Robust evaluation of time series classification algorithms for structural health monitoring
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.
2014-03-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.
Credibilistic multi-period portfolio optimization based on scenario tree
NASA Astrophysics Data System (ADS)
Mohebbi, Negin; Najafi, Amir Abbas
2018-02-01
In this paper, we consider a multi-period fuzzy portfolio optimization model that considers transaction costs and the possibility of risk-free investment. We formulate a bi-objective mean-VaR portfolio selection model based on the integration of fuzzy credibility theory and a scenario tree in order to deal with market uncertainty. The scenario tree is also a suitable method for modeling multi-period portfolio problems, given the length and continuity of their horizon. We take return and risk as well as cardinality, threshold, class, and liquidity constraints into consideration to bring the model closer to reality. Then, an interactive dynamic programming method, which is based on a two-phase fuzzy interactive approach, is employed to solve the proposed model. In order to verify the proposed model, we present an empirical application to NYSE data under different circumstances. The results show that the consideration of data uncertainty and other real-world assumptions leads to more practical and efficient solutions.
Dynamics and control of quadcopter using linear model predictive control approach
NASA Astrophysics Data System (ADS)
Islam, M.; Okasha, M.; Idres, M. M.
2017-12-01
This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular paths to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach using the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
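The receding-horizon idea can be sketched with a deliberately simplified example (a 1-D double integrator standing in for the quadcopter dynamics, no input constraints, and invented weights and horizon, so it is an illustration rather than the paper's controller): at each time step the optimal input sequence over a finite horizon is computed from the linearized model and only the first input is applied before re-solving.

```python
# Illustrative unconstrained linear MPC on a double integrator (not the paper's model).
import numpy as np

def build_prediction(A, B, N):
    """Stack x_{k+1..k+N} = Phi x_k + Gamma [u_k ... u_{k+N-1}]."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    return Phi, Gamma

def mpc_step(A, B, Q, R, N, x, ref):
    n, m = B.shape
    Phi, Gamma = build_prediction(A, B, N)
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    Xref = np.tile(ref, N)
    H = Gamma.T @ Qbar @ Gamma + Rbar
    f = Gamma.T @ Qbar @ (Xref - Phi @ x)
    U = np.linalg.solve(H, f)              # optimal input sequence over the horizon
    return U[:m]                           # receding horizon: apply only the first input

if __name__ == "__main__":
    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])  # position/velocity double integrator
    B = np.array([[0.5 * dt**2], [dt]])
    Q = np.diag([10.0, 1.0]); R = np.array([[0.1]])
    x = np.array([0.0, 0.0]); ref = np.array([1.0, 0.0])
    for _ in range(50):
        u = mpc_step(A, B, Q, R, N=15, x=x, ref=ref)
        x = A @ x + B @ u
    print("final state:", np.round(x, 3))
```

Adding actuator saturation would turn the linear solve into a constrained quadratic program, which is where the computational expense noted in the abstract comes from.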
NASA Astrophysics Data System (ADS)
Frost, Andrew J.; Thyer, Mark A.; Srikanthan, R.; Kuczera, George
2007-07-01
Multi-site simulation of hydrological data is required for drought risk assessment of large multi-reservoir water supply systems. In this paper, a general Bayesian framework is presented for the calibration and evaluation of multi-site hydrological data at annual timescales. Models included within this framework are the hidden Markov model (HMM) and the widely used lag-1 autoregressive (AR(1)) model. These models are extended by the inclusion of a Box-Cox transformation and a spatial correlation function in a multi-site setting. Parameter uncertainty is evaluated using Markov chain Monte Carlo techniques. Models are evaluated by their ability to reproduce a range of important extreme statistics and compared using Bayesian model selection techniques which evaluate model probabilities. The case study, using multi-site annual rainfall data situated within catchments which contribute to Sydney's main water supply, provided the following results: Firstly, in terms of model probabilities and diagnostics, the inclusion of the Box-Cox transformation was preferred. Secondly, the AR(1) and HMM performed similarly, while some other proposed AR(1)/HMM models with regionally pooled parameters had greater posterior probability than these two models. The practical significance of parameter and model uncertainty was illustrated using a case study involving drought security analysis for urban water supply. It was shown that ignoring parameter uncertainty resulted in a significant overestimate of reservoir yield and an underestimation of system vulnerability to severe drought.
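As a rough single-site illustration of the ingredients named above (not the paper's Bayesian multi-site machinery; the synthetic rainfall record and all settings are invented), the sketch below Box-Cox transforms an annual series, fits a lag-1 autoregressive model by least squares, and simulates a synthetic sequence:

```python
# Illustrative Box-Cox + AR(1) fit and simulation for annual rainfall (invented data).
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def fit_ar1_boxcox(annual):
    z, lam = stats.boxcox(annual)          # maximum-likelihood Box-Cox transform
    z_mean = z.mean()
    zc = z - z_mean
    rho = np.sum(zc[1:] * zc[:-1]) / np.sum(zc[:-1] ** 2)   # lag-1 coefficient
    resid = zc[1:] - rho * zc[:-1]
    return lam, z_mean, rho, resid.std(ddof=1)

def simulate_ar1_boxcox(lam, z_mean, rho, sigma, n_years, seed=0):
    rng = np.random.default_rng(seed)
    z = np.zeros(n_years)
    for t in range(1, n_years):
        z[t] = rho * z[t - 1] + sigma * rng.standard_normal()
    return inv_boxcox(z + z_mean, lam)      # back-transform to rainfall units

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    obs = rng.gamma(shape=8.0, scale=100.0, size=80)      # synthetic annual rainfall (mm)
    lam, mu, rho, sigma = fit_ar1_boxcox(obs)
    sim = simulate_ar1_boxcox(lam, mu, rho, sigma, 1000)
    print(f"lambda={lam:.2f}, rho={rho:.2f}, simulated mean={sim.mean():.0f} mm")
```

The paper's framework replaces this point fit with full posterior sampling over the parameters (and over competing AR(1)/HMM structures), which is precisely what allows parameter and model uncertainty to be carried through to reservoir yield estimates.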
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas; Burns, Joseph R.
The aftermath of the Tōhoku earthquake and the Fukushima accident has led to a global push to improve the safety of existing light water reactors. A key component of this initiative is the development of nuclear fuel and cladding materials with potentially enhanced accident tolerance, also known as accident-tolerant fuels (ATF). These materials are intended to improve core fuel and cladding integrity under beyond design basis accident conditions while maintaining or enhancing reactor performance and safety characteristics during normal operation. To complement research that has already been carried out to characterize ATF neutronics, the present study provides an initial investigation of the sensitivity and uncertainty of ATF system responses to nuclear cross section data. ATF concepts incorporate novel materials, including SiC and FeCrAl cladding and high density uranium silicide composite fuels, in turn introducing new cross section sensitivities and uncertainties which may behave differently from traditional fuel and cladding materials. In this paper, we conducted sensitivity and uncertainty analysis using the TSUNAMI-2D sequence of SCALE with infinite lattice models of ATF assemblies. Of all the ATF materials considered, it is found that radiative capture in 56Fe in FeCrAl cladding is the most significant contributor to eigenvalue uncertainty. 56Fe yields significant potential eigenvalue uncertainty associated with its radiative capture cross section; this is by far the largest ATF-specific uncertainty found in these cases, exceeding even those of uranium. We found that while significant new sensitivities indeed arise, the general sensitivity behavior of ATF assemblies does not markedly differ from traditional UO2/zirconium-based fuel/cladding systems, especially with regard to uncertainties associated with uranium. We assessed the similarity of the IPEN/MB-01 reactor benchmark model to application models with FeCrAl cladding. We used TSUNAMI-IP to calculate similarity indices of the application model and IPEN/MB-01 reactor benchmark model. This benchmark was selected for its use of SS304 as a cladding and structural material, with significant 56Fe content. The similarity indices suggest that while many differences in reactor physics arise from differences in design, sensitivity to and behavior of 56Fe absorption is comparable between systems, thus indicating the potential for this benchmark to reduce uncertainties in 56Fe radiative capture cross sections.
A hierarchical modeling methodology for the definition and selection of requirements
NASA Astrophysics Data System (ADS)
Dufresne, Stephane
This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure the traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is applied to the requirements definition of a hurricane tracker Unmanned Aerial Vehicle. Three research objectives are identified in this work: (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define the systems. These requirements are defined early in the conceptual design, where the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow down the requirements from the stakeholder expectations to the systems alternatives. A taxonomy of requirements is created to classify the information gathered during the problem definition. Subsequently, the operational and systems functions and measures of effectiveness are integrated into a hierarchical model to allow the traceability of the information. Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the epistemic uncertainty. The proposed methodology is applied to the design of a hurricane tracker Unmanned Aerial Vehicle to demonstrate the origin and impact of requirements on the concept of operations and systems alternatives. This research demonstrates that the hierarchical modeling methodology provides a traceable flow-down of the requirements from the problem definition to the systems alternatives phases of conceptual design.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there are minimal site-specific data, the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
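The weighting step can be made concrete with a small, hedged sketch (generic model-averaging arithmetic, not the report's geostatistical workflow; all criterion values, means and variances below are invented): posterior model probabilities are formed from an information criterion and used to combine each model's prediction and predictive variance, including the between-model spread.

```python
# Illustrative model averaging with information-criterion weights (invented numbers).
import numpy as np

def model_average(ic, means, variances, prior=None):
    """ic, means, variances: length-n_models arrays; prior: prior model probabilities."""
    ic = np.asarray(ic, dtype=float)
    prior = np.ones_like(ic) / len(ic) if prior is None else np.asarray(prior, float)
    w = prior * np.exp(-0.5 * (ic - ic.min()))
    w /= w.sum()                                    # posterior model probabilities
    means = np.asarray(means, float)
    mean = np.sum(w * means)
    # total variance = weighted within-model variance + between-model variance
    var = np.sum(w * (np.asarray(variances, float) + (means - mean) ** 2))
    return w, mean, var

if __name__ == "__main__":
    ic = [102.3, 104.1, 110.8, 125.6]               # hypothetical criteria for 4 variogram models
    means = [1.8, 2.1, 1.5, 2.6]                    # kriged log-permeability at a target block
    variances = [0.20, 0.25, 0.18, 0.30]            # corresponding kriging variances
    w, m, v = model_average(ic, means, variances)
    print("weights:", np.round(w, 3), " averaged mean:", round(m, 2), " variance:", round(v, 2))
```

The between-model term in the variance is what lets the averaged prediction carry conceptual-model uncertainty in addition to the parameter (kriging) uncertainty of each individual model.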
A prospective earthquake forecast experiment in the western Pacific
NASA Astrophysics Data System (ADS)
Eberhard, David A. J.; Zechar, J. Douglas; Wiemer, Stefan
2012-09-01
Since the beginning of 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) has been conducting an earthquake forecast experiment in the western Pacific. This experiment is an extension of the Kagan-Jackson experiments begun 15 years earlier and is a prototype for future global earthquake predictability experiments. At the beginning of each year, seismicity models make a spatially gridded forecast of the number of Mw≥ 5.8 earthquakes expected in the next year. For the three participating statistical models, we analyse the first two years of this experiment. We use likelihood-based metrics to evaluate the consistency of the forecasts with the observed target earthquakes and we apply measures based on Student's t-test and the Wilcoxon signed-rank test to compare the forecasts. Overall, a simple smoothed seismicity model (TripleS) performs the best, but there are some exceptions that indicate continued experiments are vital to fully understand the stability of these models, the robustness of model selection and, more generally, earthquake predictability in this region. We also estimate uncertainties in our results that are caused by uncertainties in earthquake location and seismic moment. Our uncertainty estimates are relatively small and suggest that the evaluation metrics are relatively robust. Finally, we consider the implications of our results for a global earthquake forecast experiment.
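One of the standard likelihood-based consistency checks used in such experiments can be illustrated with a short, hedged example (the Poisson N-test; the forecast rate and observed count below are invented): it asks how probable the observed number of target earthquakes is under the total rate forecast by a model.

```python
# Illustrative CSEP-style Poisson N-test (invented forecast rate and observation).
from scipy import stats

def n_test(forecast_rate, observed_count):
    """Two Poisson quantile scores used in the N-test."""
    delta1 = 1.0 - stats.poisson.cdf(observed_count - 1, forecast_rate)  # P(N >= obs)
    delta2 = stats.poisson.cdf(observed_count, forecast_rate)            # P(N <= obs)
    return delta1, delta2

if __name__ == "__main__":
    d1, d2 = n_test(forecast_rate=12.4, observed_count=18)   # hypothetical year
    print(f"P(N >= 18) = {d1:.3f}, P(N <= 18) = {d2:.3f}")
```

Very small values of either quantile flag a forecast whose total rate is inconsistent with the observed count; repeating such tests with perturbed catalogues is one way to fold magnitude and location uncertainties into the evaluation, as the abstract describes.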
Wellman, Tristan P.; Poeter, Eileen P.
2006-01-01
Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
Model Comparison in Subsurface Science: The DECOVALEX and Sim-SEQ Initiatives (Invited)
NASA Astrophysics Data System (ADS)
Birkholzer, J. T.; Mukhopadhyay, S.; Rutqvist, J.; Tsang, C.
2013-12-01
Building predictive models for flow and transport processes in the subsurface is a challenging task, even more so if these processes are coupled to geomechanical and/or geochemical effects. Modelers must take into consideration a multiplicity of length scales, a wide range of time scales, the coupling between processes, different model components, and the spatial variability in the value of most model input parameters (and often limited knowledge about them). Consequently, modelers have to make choices while developing their conceptual models. Such model choices may cause a wide range in the predictions made by different models and different modeling groups, even if each of the underlying simulators has been perfectly verified against appropriate benchmarks. In other words, the modeling activity itself is prone to uncertainty and bias. This uncertainty, referred to here as model selection uncertainty, forms one of the greatest sources of uncertainty for predictive modeling. In this paper, we discuss two examples of model intercomparison exercises that are currently undertaken to better understand model selection uncertainty, elucidate system behavior, inform needs for data collection and better physics parameterizations, and enhance community understanding of capabilities. The first example is the international DECOVALEX project, which was launched in 1992 by a group of countries dealing with modeling issues related to geologic disposal of radioactive waste. DECOVALEX is an acronym for DEvelopment of COupled THM models and their VALidation against Experiments. To date, the project has progressed successfully through five stages, each featuring a small number of test cases for model comparison related to coupled thermo-hydro-mechanical (THM) processes in geologic systems. The test cases are proposed and developed by the organizations participating in DECOVALEX; they typically involve results from major field and laboratory experiments. Over the past decades, the DECOVALEX project has played a major role in improving our understanding of coupled THM processes in fractured rock and buffer/backfill materials, a subject of importance to performance assessment of a radioactive waste geologic repository. The second example is the Sim-SEQ project, a relatively recent model comparison initiative addressing multi-phase processes relevant in geologic carbon sequestration. Like DECOVALEX, Sim-SEQ is not about benchmarking, but rather about evaluating model building efforts in a broad and comprehensive sense. In Sim-SEQ, sixteen international modeling teams are building their own models for a specific carbon sequestration site referred to as the Sim-SEQ Study site (the S-3 site). The S-3 site is patterned after the ongoing SECARB Phase III Early Test site in southwestern Mississippi, where CO2 is injected into a fluvial sandstone unit with high vertical and lateral heterogeneity. The complex geology of the S-3 site, its location in the water leg of a CO2-EOR field with a strong water drive, and the presence of methane in the reservoir brine make this a challenging task, requiring the modelers to use their best judgment in making a large number of choices about how to model various processes and properties of the system.
Adapting regional watershed management to climate change in Bavaria and Québec
NASA Astrophysics Data System (ADS)
Ludwig, Ralf; Muerth, Markus; Schmid, Josef; Jobst, Andreas; Caya, Daniel; Gauvin St-Denis, Blaise; Chaumont, Diane; Velazquez, Juan-Alberto; Turcotte, Richard; Ricard, Simon
2013-04-01
The international research project QBic3 (Quebec-Bavarian Collaboration on Climate Change) aims at investigating the potential impacts of climate change on the hydrology of regional-scale catchments in Southern Quebec (Canada) and Bavaria (Germany). For this purpose, a hydro-meteorological modeling chain has been established, applying climatic forcing from both dynamical and statistical climate model data to an ensemble of hydrological models of varying complexity. The selection of input data, process descriptions and scenarios allows for the inter-comparison of the uncertainty ranges on selected runoff indicators; a methodology to display the relative importance of each source of uncertainty is developed and results for past runoff (1971-2000) and potential future changes (2041-2070) are obtained. Finally, the impact of hydrological changes on the operational management of dams, reservoirs and transfer systems is investigated and shown for the Bavarian case studies, namely the potential change in i) hydro-power production for the Upper Isar watershed and ii) low flow augmentation and water transfer rates at the Donau-Main transfer system in Central Franconia. Two overall findings will be presented and discussed in detail: a) the climate change response of selected hydrological indicators, especially those related to low flows, is strongly affected by the choice of the hydrological model. While the changes in the hydrological cycle are best represented by a complex, physically based hydrological model, computationally less demanding models (usually simple, lumped and conceptual) can give a significant level of trust for selected indicators. b) there are major differences in the projected climate forcing stemming from the ensemble of dynamic climate models (GCM/RCM) versus the statistical-stochastical WETTREG2010 approach. While the dynamic ensemble reveals a moderate modification of the hydrological processes in the investigated catchments, the WETTREG2010-driven runs show a severe deterioration for all water operations, mainly related to a strong decline in projected precipitation in all seasons (except winter).
Jennings, Simon; Collingridge, Kate
2015-01-01
Existing estimates of fish and consumer biomass in the world’s oceans are disparate. This creates uncertainty about the roles of fish and other consumers in biogeochemical cycles and ecosystem processes, the extent of human and environmental impacts and fishery potential. We develop and use a size-based macroecological model to assess the effects of parameter uncertainty on predicted consumer biomass, production and distribution. Resulting uncertainty is large (e.g. median global biomass 4.9 billion tonnes for consumers weighing 1 g to 1000 kg; 50% uncertainty intervals of 2 to 10.4 billion tonnes; 90% uncertainty intervals of 0.3 to 26.1 billion tonnes) and driven primarily by uncertainty in trophic transfer efficiency and its relationship with predator-prey body mass ratios. Even the upper uncertainty intervals for global predictions of consumer biomass demonstrate the remarkable scarcity of marine consumers, with less than one part in 30 million by volume of the global oceans comprising tissue of macroscopic animals. Thus the apparently high densities of marine life seen in surface and coastal waters and frequently visited abundance hotspots will likely give many in society a false impression of the abundance of marine animals. Unexploited baseline biomass predictions from the simple macroecological model were used to calibrate a more complex size- and trait-based model to estimate fisheries yield and impacts. Yields are highly dependent on baseline biomass and fisheries selectivity. Predicted global sustainable fisheries yield increases ≈4 fold when smaller individuals (< 20 cm from species of maximum mass < 1kg) are targeted in all oceans, but the predicted yields would rarely be accessible in practice and this fishing strategy leads to the collapse of larger species if fishing mortality rates on different size classes cannot be decoupled. Our analyses show that models with minimal parameter demands that are based on a few established ecological principles can support equitable analysis and comparison of diverse ecosystems. The analyses provide insights into the effects of parameter uncertainty on global biomass and production estimates, which have yet to be achieved with complex models, and will therefore help to highlight priorities for future research and data collection. However, the focus on simple model structures and global processes means that non-phytoplankton primary production and several groups, structures and processes of ecological and conservation interest are not represented. Consequently, our simple models become increasingly less useful than more complex alternatives when addressing questions about food web structure and function, biodiversity, resilience and human impacts at smaller scales and for areas closer to coasts. PMID:26226590
An efficient Bayesian data-worth analysis using a multilevel Monte Carlo method
NASA Astrophysics Data System (ADS)
Lu, Dan; Ricciuto, Daniel; Evans, Katherine
2018-03-01
Improving the understanding of subsurface systems and thus reducing prediction uncertainty requires collection of data. As the collection of subsurface data is costly, it is important that the data collection scheme is cost-effective. Design of a cost-effective data collection scheme, i.e., data-worth analysis, requires quantifying model parameter, prediction, and both current and potential data uncertainties. Assessment of these uncertainties in large-scale stochastic subsurface hydrological model simulations using standard Monte Carlo (MC) sampling or surrogate modeling is extremely computationally intensive, sometimes even infeasible. In this work, we propose an efficient Bayesian data-worth analysis using a multilevel Monte Carlo (MLMC) method. Compared to the standard MC that requires a significantly large number of high-fidelity model executions to achieve a prescribed accuracy in estimating expectations, the MLMC can substantially reduce computational costs using multifidelity approximations. Since the Bayesian data-worth analysis involves a great deal of expectation estimation, the cost saving of the MLMC in the assessment can be outstanding. While the proposed MLMC-based data-worth analysis is broadly applicable, we use it for a highly heterogeneous two-phase subsurface flow simulation to select an optimal candidate data set that gives the largest uncertainty reduction in predicting mass flow rates at four production wells. The choices made by the MLMC estimation are validated by the actual measurements of the potential data, and consistent with the standard MC estimation. But compared to the standard MC, the MLMC greatly reduces the computational costs.
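As a rough illustration of the multilevel idea (not the authors' implementation; the two "fidelity levels" below are toy functions standing in for coarse and fine subsurface simulations), a minimal Python sketch of the telescoping MLMC estimator is:

```python
import numpy as np

def mlmc_estimate(samplers, n_samples):
    """Minimal multilevel Monte Carlo estimator.

    samplers[l](n) must return paired arrays (fine_l, coarse_l) of n correlated
    model outputs at level l; at level 0 the coarse term is zero.  The MLMC
    estimate telescopes the level differences:
        E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
    """
    estimate, variance = 0.0, 0.0
    for sample, n in zip(samplers, n_samples):
        fine, coarse = sample(n)
        diff = fine - coarse
        estimate += diff.mean()
        variance += diff.var(ddof=1) / n      # variance of this level's estimator
    return estimate, variance

# Toy example: "high-fidelity" model = fine output, "low-fidelity" = coarse output.
rng = np.random.default_rng(0)

def level0(n):
    x = rng.normal(size=n)
    return np.sin(x), np.zeros(n)             # coarse term absent at level 0

def level1(n):
    x = rng.normal(size=n)
    return np.sin(x) + 0.01 * x**3, np.sin(x) # same random input => correlated pair

est, var = mlmc_estimate([level0, level1], [10000, 500])
print(f"MLMC estimate: {est:.4f} +/- {np.sqrt(var):.4f}")
```

Most samples go to the cheap level and only a few to the expensive correction term, which is where the cost saving over standard MC comes from.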
A game-based decision support methodology for competitive systems design
NASA Astrophysics Data System (ADS)
Briceno, Simon Ignacio
This dissertation describes the development of a game-based methodology that facilitates the exploration and selection of research and development (R&D) projects under uncertain competitive scenarios. The proposed method provides an approach that analyzes competitor positioning and formulates response strategies to forecast the impact of technical design choices on a project's market performance. A critical decision in the conceptual design phase of propulsion systems is the selection of the best architecture, centerline, core size, and technology portfolio. This selection can be challenging when considering evolving requirements from both the airframe manufacturing company and the airlines in the market. Furthermore, the exceedingly high cost of core architecture development and its associated risk make this strategic architecture decision the most important one for an engine company. Traditional conceptual design processes emphasize performance and affordability as their main objectives. These areas alone, however, do not provide decision-makers with enough information as to how successful their engine will be in a competitive market. A key objective of this research is to examine how firm characteristics such as their relative differences in completing R&D projects, differences in the degree of substitutability between different project types, and first/second-mover advantages affect their product development strategies. Several quantitative methods are investigated that analyze business and engineering strategies concurrently. In particular, formulations based on the well-established mathematical field of game theory are introduced to obtain insights into the project selection problem. The use of game theory is explored in this research as a method to assist the selection process of R&D projects in the presence of imperfect market information. The proposed methodology focuses on two influential factors: the schedule uncertainty of project completion times and the uncertainty associated with competitive reactions. A normal-form matrix is created to enumerate players, their moves and payoffs, and to formulate a process by which an optimal decision can be achieved. The non-cooperative model is tested using the concept of a Nash equilibrium to identify potential strategies that are robust to uncertain market fluctuations (e.g., uncertainty in airline demand, airframe requirements and competitor positioning). A first/second-mover advantage parameter is used as a scenario dial to adjust market rewards and firms' payoffs. The methodology is applied to a commercial aircraft engine selection study where engine firms must select an optimal engine project for development. An engine modeling and simulation framework is developed to generate a broad engine project portfolio. The creation of a customer value model enables designers to incorporate airline operation characteristics into the engine modeling and simulation process to improve the accuracy of engine/customer matching. In summary, several key findings are made that provide recommendations on project selection strategies for firms uncertain as to when they will enter the market. The proposed study demonstrates that within a technical design environment, a rational and analytical means of modeling project development strategies is beneficial in high market risk situations.
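The normal-form/Nash-equilibrium machinery can be illustrated with a minimal pure-strategy equilibrium search; the 2x2 payoff matrices below are purely hypothetical and are not taken from the dissertation:

```python
import numpy as np

def pure_nash_equilibria(payoff_a, payoff_b):
    """Enumerate pure-strategy Nash equilibria of a two-player normal-form game.

    payoff_a[i, j] / payoff_b[i, j]: payoffs to firms A and B when A plays
    row strategy i and B plays column strategy j.
    """
    equilibria = []
    rows, cols = payoff_a.shape
    for i in range(rows):
        for j in range(cols):
            a_best = payoff_a[i, j] >= payoff_a[:, j].max()  # A cannot gain by deviating
            b_best = payoff_b[i, j] >= payoff_b[i, :].max()  # B cannot gain by deviating
            if a_best and b_best:
                equilibria.append((i, j))
    return equilibria

# Hypothetical 2x2 R&D game: each firm chooses an "aggressive" (0) or
# "derivative" (1) engine project; payoffs are illustrative only.
A = np.array([[3.0, 6.0],
              [5.0, 4.0]])
B = np.array([[3.0, 5.0],
              [6.0, 4.0]])
print(pure_nash_equilibria(A, B))   # -> [(0, 1), (1, 0)]
```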
Musella, Vincenzo; Rinaldi, Laura; Lagazio, Corrado; Cringoli, Giuseppe; Biggeri, Annibale; Catelan, Dolores
2014-09-15
Model-based geostatistics and Bayesian approaches are appropriate in the context of Veterinary Epidemiology when point data have been collected by valid study designs. The aim is to predict a continuous infection risk surface. Little work has been done on the use of predictive infection probabilities at the farm unit level. In this paper we show how to use predictive infection probability and related uncertainty from a Bayesian kriging model to draw informative samples from the 8794 geo-referenced sheep farms of the Campania region (southern Italy). Parasitological data come from a first cross-sectional survey carried out to study the spatial distribution of selected helminths in sheep farms. A grid sampling was performed to select the farms for coprological examinations. Faecal samples were collected from 121 sheep farms and the presence of 21 different helminths was investigated using the FLOTAC technique. The 21 responses are very different in terms of geographical distribution and prevalence of infection. The observed prevalence ranges from 0.83% to 96.69%. The distributions of the posterior predictive probabilities for all the 21 parasites are very heterogeneous. We show how the results of the Bayesian kriging model can be used to plan a second wave survey. Several alternatives can be chosen depending on the purposes of the second survey: weight by the posterior predictive probabilities, by their uncertainty, or by a combination of both. The proposed Bayesian kriging model is simple, and the proposed sampling strategy represents a useful tool to address targeted infection control treatments and surveillance campaigns. It is easily extendable to other fields of research. Copyright © 2014 Elsevier B.V. All rights reserved.
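A minimal sketch of the second-wave sampling idea (the farm-level probabilities and uncertainties below are simulated placeholders, not the Campania kriging output) shows how inclusion probabilities could be made proportional to the posterior predictive probability, its uncertainty, or their product:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior summaries from a Bayesian kriging model:
# p_mean = posterior predictive probability of infection per farm
# p_sd   = posterior uncertainty (standard deviation) of that probability
n_farms = 8794
p_mean = rng.beta(2, 5, size=n_farms)
p_sd = rng.uniform(0.01, 0.2, size=n_farms)

def select_farms(weights, n_select=150):
    """Draw a second-wave sample with inclusion probability proportional to weights."""
    probs = weights / weights.sum()
    return rng.choice(n_farms, size=n_select, replace=False, p=probs)

# Three alternative designs corresponding to the options in the abstract:
by_probability = select_farms(p_mean)        # target likely-infected farms
by_uncertainty = select_farms(p_sd)          # target farms we know least about
by_both = select_farms(p_mean * p_sd)        # compromise between the two
print(len(by_probability), len(by_uncertainty), len(by_both))
```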
Brandsch, Rainer
2017-10-01
Migration modelling provides reliable migration estimates from food-contact materials (FCM) to food or food simulants based on mass-transfer parameters like diffusion and partition coefficients related to individual materials. In most cases, mass-transfer parameters are not readily available from the literature and are therefore estimated with a given uncertainty. Historically, uncertainty was accounted for by introducing upper-limit concepts, which turned out to be of limited applicability because they grossly overestimate migration. Probabilistic migration modelling makes it possible to consider the uncertainty of the mass-transfer parameters as well as of other model inputs. With respect to a functional barrier, the most important parameters are the diffusion properties of the functional barrier and its thickness. A software tool that accepts distributions as inputs and applies Monte Carlo methods, i.e., random sampling from the input distributions of the relevant parameters (diffusion coefficient and layer thickness), predicts migration results with the related uncertainty and confidence intervals. The capabilities of probabilistic migration modelling are presented in view of three case studies: (1) sensitivity analysis, (2) functional barrier efficiency and (3) validation by experimental testing. Based on the migration predicted by probabilistic modelling and the related exposure estimates, safety evaluation of new materials in the context of existing or new packaging concepts becomes possible, and associated migration risks and potential safety concerns can be identified at an early stage of packaging development. Furthermore, dedicated material selection exhibiting the required functional barrier efficiency under application conditions becomes feasible. Validation of the migration risk assessment by probabilistic migration modelling through a minimum of dedicated experimental testing is strongly recommended.
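A hedged sketch of the Monte Carlo step (using a simplified diffusion lag-time criterion rather than the full layered mass-transfer solution of dedicated migration software, and with illustrative input distributions) propagates uncertainty in the diffusion coefficient and barrier thickness as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # Monte Carlo draws

# Hypothetical input distributions (illustrative values, not measured data):
D = rng.lognormal(mean=np.log(1e-14), sigma=0.5, size=n)  # diffusion coeff. [cm^2/s]
L = rng.normal(loc=2e-3, scale=2e-4, size=n)               # barrier thickness [cm]

# Simple screening quantity: diffusion lag time through the barrier,
# t_lag = L^2 / (6 D); the barrier is deemed effective if t_lag exceeds
# the intended shelf life.  (Real tools solve the layered diffusion problem.)
t_lag_days = (L**2 / (6.0 * D)) / 86_400.0

shelf_life_days = 365.0
print("median lag time [days]:", np.median(t_lag_days))
print("95% interval:", np.percentile(t_lag_days, [2.5, 97.5]))
print("P(barrier 'fails' before shelf life):", np.mean(t_lag_days < shelf_life_days))
```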
The Agricultural Model Intercomparison and Improvement Project (AgMIP): Protocols and Pilot Studies
NASA Technical Reports Server (NTRS)
Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.; Antle, J. M.; Nelson, G. C.; Porter, C.; Janssen, S.;
2012-01-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) is a major international effort linking the climate, crop, and economic modeling communities with cutting-edge information technology to produce improved crop and economic models and the next generation of climate impact projections for the agricultural sector. The goals of AgMIP are to improve substantially the characterization of world food security due to climate change and to enhance adaptation capacity in both developing and developed countries. Analyses of the agricultural impacts of climate variability and change require a transdisciplinary effort to consistently link state-of-the-art climate scenarios to crop and economic models. Crop model outputs are aggregated as inputs to regional and global economic models to determine regional vulnerabilities, changes in comparative advantage, price effects, and potential adaptation strategies in the agricultural sector. Climate, Crop Modeling, Economics, and Information Technology Team Protocols are presented to guide coordinated climate, crop modeling, economics, and information technology research activities around the world, along with AgMIP Cross-Cutting Themes that address uncertainty, aggregation and scaling, and the development of Representative Agricultural Pathways (RAPs) to enable testing of climate change adaptations in the context of other regional and global trends. The organization of research activities by geographic region and specific crops is described, along with project milestones. Pilot results demonstrate AgMIP's role in assessing climate impacts with explicit representation of uncertainties in climate scenarios and simulations using crop and economic models. An intercomparison of wheat model simulations near Obregón, Mexico reveals inter-model differences in yield sensitivity to [CO2] with model uncertainty holding approximately steady as concentrations rise, while uncertainty related to choice of crop model increases with rising temperatures. Wheat model simulations with midcentury climate scenarios project a slight decline in absolute yields that is more sensitive to selection of crop model than to global climate model, emissions scenario, or climate scenario downscaling method. A comparison of regional and national-scale economic simulations finds a large sensitivity of projected yield changes to the simulations' resolved scales. Finally, a global economic model intercomparison example demonstrates that improvements in the understanding of agriculture futures arise from integration of the range of uncertainty in crop, climate, and economic modeling results in multi-model assessments.
Steen, Valerie; Sofaer, Helen R.; Skagen, Susan K.; Ray, Andrea J.; Noon, Barry R
2017-01-01
Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water availability may be a more proximal driver than precipitation. However, because cross-validation results were correlated with extrapolation results, the use of cross-validation performance metrics to guide modeling choices where knowledge is limited was supported.
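The factorial structure of such an uncertainty analysis can be sketched as follows; the particular split of factor levels (chosen only so that they multiply to 1,080) and the crude main-effect variance attribution are illustrative assumptions, not the published design:

```python
import itertools
import numpy as np
import pandas as pd

# Hypothetical split of the four decision axes; level counts multiply to 1,080.
factors = {
    "future_climate":  [f"climate_{k}" for k in range(1, 10)],    # 9 climate futures
    "covariate_class": ["bioclimatic", "hydrological", "other"],  # 3 covariate classes
    "collinearity":    [f"collin_{k}" for k in range(1, 5)],      # 4 collinearity levels
    "threshold":       [f"thr_{k}" for k in range(1, 11)],        # 10 thresholding rules
}
design = pd.DataFrame(list(itertools.product(*factors.values())),
                      columns=list(factors))
assert len(design) == 1080   # one row per SDM for a given species

# Fake response for illustration only: projected % range loss per model run.
rng = np.random.default_rng(7)
design["range_loss"] = rng.normal(30, 15, len(design))

# Crude main-effect attribution: variance of factor-level means vs. total variance.
total_var = design["range_loss"].var()
for f in factors:
    between = design.groupby(f)["range_loss"].mean().var()
    print(f"{f:15s} share of variance ~ {between / total_var:.2%}")
```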
Quantifying Uncertainty of Wind Power Production Through an Analog Ensemble
NASA Astrophysics Data System (ADS)
Shahriari, M.; Cervone, G.
2016-12-01
The Analog Ensemble (AnEn) method is used to generate probabilistic weather forecasts that quantify the uncertainty in power estimates at hypothetical wind farm locations. The data are from the NREL Eastern Wind Dataset, which includes more than 1,300 modeled wind farms. The AnEn model uses a two-dimensional grid to estimate the probability distribution of wind speed (the predictand) given the values of predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind. The meteorological data are taken from the NCEP GFS, which is available at 0.25-degree grid resolution. The methodology first divides the data into two classes: a training period and a verification period. The AnEn selects a point in the verification period and searches for the best-matching estimates (analogs) in the training period. The predictand values at those analogs form the ensemble prediction for the point in the verification period. The model provides a grid of wind speed values and the uncertainty (probability index) associated with each estimate. Each wind farm is associated with a probability index which quantifies the degree of difficulty in estimating wind power. Further, the uncertainty in estimation is related to other factors such as topography, land cover and wind resources. This is achieved by using a GIS system to compute the correlation between the probability index and geographical characteristics. This study has significant applications for investors in the renewable energy sector, especially wind farm developers. A lower level of uncertainty facilitates the process of submitting bids into day-ahead and real-time electricity markets. Thus, building wind farms in regions with lower levels of uncertainty will reduce real-time operational risks and create a hedge against volatile real-time prices. Further, the links between wind estimate uncertainty and factors such as topography and wind resources provide wind farm developers with valuable information regarding wind farm siting.
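A bare-bones version of the analog search (Euclidean distance as the similarity metric and synthetic data in place of the GFS predictors and NREL wind speeds) might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the real predictors (e.g., temperature, pressure,
# geopotential height, U/V wind) and the predictand (wind speed).
n_train, n_verif, n_pred = 5000, 200, 5
X_train = rng.normal(size=(n_train, n_pred))
y_train = np.abs(X_train @ rng.normal(size=n_pred)) + rng.gamma(2.0, 1.0, n_train)
X_verif = rng.normal(size=(n_verif, n_pred))

def analog_ensemble(x_now, X_hist, y_hist, n_analogs=21):
    """Return the predictand values at the closest historical analogs."""
    dist = np.linalg.norm(X_hist - x_now, axis=1)   # similarity metric
    best = np.argsort(dist)[:n_analogs]             # best-matching past states
    return y_hist[best]

# Probabilistic forecast for a few verification times: ensemble quantiles plus
# a simple spread-based "probability index" (larger spread = harder to estimate).
for x in X_verif[:3]:
    ens = analog_ensemble(x, X_train, y_train)
    q10, q50, q90 = np.percentile(ens, [10, 50, 90])
    print(f"median={q50:5.2f}  80% interval=[{q10:5.2f}, {q90:5.2f}]  spread={ens.std():.2f}")
```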
Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization
NASA Astrophysics Data System (ADS)
Tsai, F. T.; Li, X.
2006-12-01
Non-uniqueness in the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), in which the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
Lopez-Coto, Israel; Ghosh, Subhomoy; Prasad, Kuldeep; Whetstone, James
2017-09-01
The North-East Corridor (NEC) Testbed project is the third of three NIST (National Institute of Standards and Technology) greenhouse gas emissions testbeds designed to advance greenhouse gas measurement capabilities. A design approach for a dense observing network combined with atmospheric inversion methodologies is described. The Advanced Research Weather Research and Forecasting Model with the Stochastic Time-Inverted Lagrangian Transport model were used to derive the sensitivity of hypothetical observations to surface greenhouse gas emissions (footprints). Unlike other network design algorithms, an iterative selection algorithm, based on a k-means clustering method, was applied to minimize the similarities between the temporal responses of the sites and maximize sensitivity to the urban emissions contribution. Once a network was selected, a synthetic inversion Bayesian Kalman filter was used to evaluate observing system performance. We present the performance of various measurement network configurations consisting of differing numbers of towers and tower locations. Results show that an overly compact network has decreased spatial coverage, because the spatial information added per site is then suboptimal for covering the largest possible area, whilst networks dispersed too broadly lose the capability to constrain flux uncertainties. In addition, we explore the possibility of using a very high density network of lower cost and performance sensors characterized by larger uncertainties and temporal drift. Analysis convergence is faster with a large number of observing locations, reducing the response time of the filter. Larger uncertainties in the observations imply lower values of uncertainty reduction. On the other hand, the drift is a bias in nature; when added to the observations it therefore biases the retrieved fluxes.
Weighing conservation objectives: maximum expected coverage versus endangered species protection
Jeffrey L. Arthur; Jeffrey D. Camm; Robert G. Haight; Claire A. Montgomery; Stephen Polasky
2004-01-01
Decision makers involved in land acquisition and protection often have multiple conservation objectives and are uncertain about the occurrence of species or other features in candidate sites. Models informing decisions on the selection of sites for reserves need to provide information about cost-efficient trade-offs between objectives and account for incidence uncertainty...
Some suggested future directions of quantitative resource assessments
Singer, D.A.
2001-01-01
Future quantitative assessments will be expected to estimate quantities, values, and locations of undiscovered mineral resources in a form that conveys both the economic viability and the uncertainty associated with the resources. Historically, declining metal prices point to the need for larger deposits over time. Sensitivity analysis demonstrates that the greatest opportunity for reducing uncertainty in assessments lies in lowering the uncertainty associated with tonnage estimates. Of all errors possible in assessments, those affecting tonnage estimates are by far the most important. Selecting the correct deposit model is the most important way of controlling these errors, because deposit models are the best known predictors of tonnage. In many large regions, much of the surface is covered with apparently barren rocks and sediments. Because many exposed mineral deposits are believed to have been found, a prime concern is the presence of possible mineralized rock under cover. Assessments of areas with resources under cover must rely on extrapolation from surrounding areas, new geologic maps of rocks under cover, or analogy with other well-explored areas that can be considered training tracts. Cover has a profound effect on uncertainty and on the methods and procedures of assessments, because the geology is seldom known and geophysical methods typically have attenuated responses. Many earlier assessment methods were based on relationships of geochemical and geophysical variables to deposits learned from deposits exposed on the surface; these relationships will need to be relearned for covered deposits. Mineral-deposit models are important in quantitative resource assessments for two reasons: (1) grades and tonnages of most deposit types are significantly different, and (2) deposit types occur in different geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Grade and tonnage models and the development of quantitative descriptive, economic, and deposit-density models will help reduce the uncertainty of these new assessments.
Towards resiliency with micro-grids: Portfolio optimization and investment under uncertainty
NASA Astrophysics Data System (ADS)
Gharieh, Kaveh
Energy security and a sustained supply of power are critical for community welfare and economic growth. In the face of the increased frequency and intensity of extreme weather events that can result in power grid outages, the value of micro-grids in improving communities' power reliability and resiliency is becoming more important. The capability of micro-grids to operate in islanded mode under stressed conditions dramatically decreases the economic loss of critical infrastructure during power shortages. More widespread participation of micro-grids in the wholesale energy market in the near future makes the development of new investment models necessary. However, market and price risks in the short and long term, along with the impacts of risk factors, must be taken into consideration in the development of new investment models. This work proposes a set of models and tools to address different problems associated with micro-grid assets, including optimal portfolio selection, investment and financing, at both the community level and for a sample critical infrastructure (a wastewater treatment plant). The models account for short-term operational volatilities and long-term market uncertainties. A number of analytical methodologies and financial concepts have been adopted to develop the aforementioned models, as follows. (1) Capital budgeting planning and portfolio optimization models with Monte Carlo stochastic scenario generation are applied to derive the optimal investment decision for a portfolio of micro-grid assets considering risk factors and multiple sources of uncertainty. (2) Real option theory, Monte Carlo simulation and stochastic optimization techniques are applied to obtain optimal modularized investment decisions for hydrogen tri-generation systems in wastewater treatment facilities, considering multiple sources of uncertainty. (3) The public-private partnership (PPP) financing concept, coupled with an investment horizon approach, is applied to estimate the public and private parties' revenue shares from a community-level micro-grid project over the assets' lifetime, considering their optimal operation under uncertainty.
NASA Astrophysics Data System (ADS)
Gennaro, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
We discuss the reliability of one of the most widely used methods to determine the Helium-to-metals enrichment ratio, Delta Y / Delta Z, i.e. the photometric comparison of a selected data set of local disk low Main Sequence (MS) stars observed by HIPPARCOS with a new grid of stellar models with up-to-date input physics. Most of the attention has been devoted to evaluating the effects on the final results of different sources of uncertainty (observational errors, evolutionary effects, selection criteria, systematic uncertainties of the models, numerical errors). As a check of the result, the procedure has been repeated using another, independent, data set: the low-MS of the Hyades cluster. The obtained Delta Y / Delta Z for the Hyades, together with spectroscopic determinations of the [Fe/H] ratio, have been used to obtain the Y and Z values for the cluster. Isochrones have been calculated with the estimated chemical composition, obtaining a very good agreement between the predicted position of the Hyades MS and the observational data in the Color-Magnitude Diagram (CMD).
Galactic ΔY/ΔZ from the analysis of solar neighborhood main sequence stars
NASA Astrophysics Data System (ADS)
Gennaro, M.; Moroni, P. G. Prada; Degl'Innocenti, S.
2009-05-01
We discuss the reliability of one of the most widely used methods to determine the helium-to-metals enrichment ratio (ΔY/ΔZ), i.e. the photometric comparison of a selected data set of local disk low main sequence (MS) stars observed by Hipparcos with a new grid of stellar models with up-to-date input physics. Most of the attention has been devoted to evaluating the effects on the final results of different sources of uncertainty (observational errors, evolutionary effects, selection criteria, systematic uncertainties of the models, numerical errors). As a check of the result, the procedure has been repeated using another, independent, data set: the low-MS of the Hyades cluster. The obtained ΔY/ΔZ for the Hyades, together with spectroscopic determinations of the [Fe/H] ratio, have been used to obtain the Y and Z values for the cluster. Isochrones have been calculated with the estimated chemical composition, obtaining a very good agreement between the predicted position of the Hyades MS and the observational data in the color-magnitude diagram.
Reducing the uncertainty in the fidelity of seismic imaging results
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Zou, Z.
2017-12-01
A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect in uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifact. Key challenges to achieving the fidelity of seismic imaging include: (1) Difficulty to tell signal from artifact and noise; (2) Limitations in signal-to-noise ratio and seismic illumination; and (3) The multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they are in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach of seismic imaging: Using velocity modeling building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. A good example is the reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.
Allen, R J; Rieger, T R; Musante, C J
2016-03-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
NASA Astrophysics Data System (ADS)
Yin, Sisi; Nishi, Tatsushi
2014-11-01
Quantity discount policy is a decision-making mechanism for negotiating trade-off prices between suppliers and manufacturers when production quantities change due to demand fluctuations in a real market. In this paper, quantity discount models that simultaneously consider the selection of contract suppliers, production quantities and inventory are addressed. The supply chain planning problem with quantity discounts under demand uncertainty is formulated as a mixed-integer nonlinear programming problem (MINLP) with integral terms. We apply an outer-approximation method to solve MINLP problems. In order to improve the efficiency of the proposed method, the problem is reformulated as a stochastic model by replacing the integral terms using a normalisation technique. We present numerical examples to demonstrate the efficiency of the proposed method.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) ranking the models by their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also the uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected, as the best model, the one with average complexity (10 parameters) and the best parameter estimation (model 3).
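For the information-criterion part of the comparison, a small sketch (with assumed observation counts and sums of squared errors; the KIC, GLUE and fuzzy MCDM steps are omitted) shows how AIC, AICc and BIC translate into model weights for the six models:

```python
import numpy as np

# Hypothetical calibration results for the six flow models: number of
# parameters k and residual sum of squares (SSE) against n head observations.
n = 120                                               # assumed number of observations
k = np.array([6, 10, 10, 13, 13, 15])
sse = np.array([41.0, 23.0, 21.5, 20.8, 20.2, 19.9])  # illustrative values

# Gaussian-error criteria (additive constants dropped): smaller is better.
aic = n * np.log(sse / n) + 2 * k
aicc = aic + 2 * k * (k + 1) / (n - k - 1)
bic = n * np.log(sse / n) + k * np.log(n)

def weights(criterion):
    """Akaike-type model weights from an information criterion."""
    delta = criterion - criterion.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

for name, crit in [("AIC", aic), ("AICc", aicc), ("BIC", bic)]:
    print(name, np.round(weights(crit), 3))
```

The penalty terms grow with the number of parameters, which is how such criteria push back against over-parameterized models.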
Giltrap, Donna L; Ausseil, Anne-Gaelle E; Thakur, Kailash P; Sutherland, M Anne
2013-11-01
In this study, we developed emission factor (EF) look-up tables for calculating the direct nitrous oxide (N2O) emissions from grazed pasture soils in New Zealand. Look-up tables of long-term average direct emission factors (and their associated uncertainties) were generated using multiple simulations of the NZ-DNDC model over a representative range of major soil, climate and management conditions occurring in New Zealand using 20 years of climate data. These EFs were then combined with national activity data maps to estimate direct N2O emissions from grazed pasture in New Zealand using 2010 activity data. The total direct N2O emissions using look-up tables were 12.7±12.1 Gg N2O-N (equivalent to using a national average EF of 0.70±0.67%). This agreed with the amount calculated using the New Zealand specific EFs (95% confidence interval 7.7-23.1 Gg N2O-N), although the relative uncertainty increased. The high uncertainties in the look-up table EFs were primarily due to the high uncertainty of the soil parameters within the selected soil categories. Uncertainty analyses revealed that the uncertainty in soil parameters contributed much more to the uncertainty in N2O emissions than the inter-annual weather variability. The effect of changes to fertiliser applications was also examined and it was found that for fertiliser application rates of 0-50 kg N/ha for sheep and beef and 60-240 kg N/ha for dairy the modelled EF was within ±10% of the value simulated using annual fertiliser application rates of 15 kg N/ha and 140 kg N/ha respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
Radiotherapy Dose Fractionation under Parameter Uncertainty
NASA Astrophysics Data System (ADS)
Davison, Matt; Kim, Daero; Keller, Harald
2011-11-01
In radiotherapy, radiation is directed to damage a tumor while avoiding surrounding healthy tissue. Tradeoffs ensue because dose cannot be exactly shaped to the tumor. It is particularly important to ensure that sensitive biological structures near the tumor are not damaged more than a certain amount. Biological tissue is known to have a nonlinear response to incident radiation. The linear quadratic dose response model, which requires the specification of two clinically and experimentally observed response coefficients, is commonly used to model this effect. This model yields an optimization problem giving two different types of optimal dose sequences (fractionation schedules). Which fractionation schedule is preferred depends on the response coefficients. These coefficients are uncertainly known and may differ from patient to patient. Because of this not only the expected outcomes but also the uncertainty around these outcomes are important, and it might not be prudent to select the strategy with the best expected outcome.
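The linear quadratic trade-off can be made concrete with a short calculation; the α/β values below are generic textbook-style assumptions rather than the clinical coefficients considered in the paper, and parameter uncertainty is represented by simple sampling:

```python
import numpy as np

def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose of the linear quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

rng = np.random.default_rng(0)

# Two candidate schedules delivering the same physical dose (60 Gy total).
schedules = {"30 x 2 Gy": (30, 2.0), "15 x 4 Gy": (15, 4.0)}

# Uncertain tissue response: alpha/beta sampled around an assumed 3 Gy
# (late-responding normal tissue) to mimic patient-to-patient variability.
alpha_beta_samples = rng.normal(3.0, 0.75, size=10_000).clip(1.0, None)

for label, (n, d) in schedules.items():
    beds = bed(n, d, alpha_beta_samples)
    lo, hi = np.percentile(beds, [5, 95])
    print(f"{label}: mean BED = {beds.mean():6.1f} Gy, 90% interval = [{lo:.1f}, {hi:.1f}] Gy")
```

Because the two schedules deliver the same physical dose, the comparison hinges entirely on the uncertain α/β ratio, which is exactly the kind of parameter uncertainty the abstract is concerned with.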
A Stochastic Climate Generator for Agriculture in Southeast Asian Domains
NASA Astrophysics Data System (ADS)
Greene, A. M.; Allis, E. C.
2014-12-01
We extend a previously-described method for generating future climate scenarios, suitable for driving agricultural models, to selected domains in Lao PDR, Bangladesh and Indonesia. There are notable differences in climatology among the study regions, most importantly the inverse seasonal relationship of southeast Asian and Australian monsoons. These differences necessitate a partially-differentiated modeling approach, utilizing common features for better estimation while allowing independent modeling of divergent attributes. The method attempts to constrain uncertainty due to both anthropogenic and natural influences, providing a measure of how these effects may combine during specified future decades. Seasonal climate fields are downscaled to the daily time step by resampling the AgMERRA dataset, providing a full suite of agriculturally relevant variables and enabling the propagation of climate uncertainty to agricultural outputs. The role of this research in a broader project, conducted under the auspices of the International Fund for Agricultural Development (IFAD), is discussed.
Selecting surrogate endpoints for estimating pesticide effects on avian reproductive success.
Bennett, Richard S; Etterson, Matthew A
2013-10-01
A Markov chain nest productivity model (MCnest) has been developed for projecting the effects of a specific pesticide-use scenario on the annual reproductive success of avian species of concern. A critical element in MCnest is the use of surrogate endpoints, defined as measured endpoints from avian toxicity tests that represent specific types of effects possible in field populations at specific phases of a nesting attempt. In this article, we discuss the attributes of surrogate endpoints and provide guidance for selecting surrogates from existing avian laboratory tests as well as other possible sources. We also discuss some of the assumptions and uncertainties related to using surrogate endpoints to represent field effects. The process of explicitly considering how toxicity test results can be used to assess effects in the field helps identify uncertainties and data gaps that could be targeted in higher-tier risk assessments. © 2013 SETAC.
Delton, Andrew W.; Krasnow, Max M.; Cosmides, Leda; Tooby, John
2011-01-01
Are humans too generous? The discovery that subjects choose to incur costs to allocate benefits to others in anonymous, one-shot economic games has posed an unsolved challenge to models of economic and evolutionary rationality. Using agent-based simulations, we show that such generosity is the necessary byproduct of selection on decision systems for regulating dyadic reciprocity under conditions of uncertainty. In deciding whether to engage in dyadic reciprocity, these systems must balance (i) the costs of mistaking a one-shot interaction for a repeated interaction (hence, risking a single chance of being exploited) with (ii) the far greater costs of mistaking a repeated interaction for a one-shot interaction (thereby precluding benefits from multiple future cooperative interactions). This asymmetry builds organisms naturally selected to cooperate even when exposed to cues that they are in one-shot interactions. PMID:21788489
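The cost asymmetry can be restated as a simple expected-payoff threshold; the numbers below are arbitrary stand-ins for "small one-shot cost versus large repeated-interaction benefit", not parameters from the agent-based simulations:

```python
# Decision rule for an agent holding belief p that the interaction is repeated.
# Cooperating in a one-shot interaction costs c (being exploited once);
# defecting in a repeated interaction forfeits benefit B (many future exchanges).
def should_cooperate(p_repeat, cost_one_shot, benefit_repeated):
    expected_loss_if_cooperate = (1.0 - p_repeat) * cost_one_shot
    expected_loss_if_defect = p_repeat * benefit_repeated
    return expected_loss_if_defect > expected_loss_if_cooperate

c, B = 1.0, 20.0          # illustrative: exploitation is cheap, lost reciprocity is not
threshold = c / (c + B)   # cooperate whenever p_repeat exceeds this value
print(f"cooperation threshold p* = {threshold:.3f}")                              # ~0.048
print(should_cooperate(p_repeat=0.10, cost_one_shot=c, benefit_repeated=B))       # True even at weak cues
```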
NASA Astrophysics Data System (ADS)
Pochampally, Kishore K.; Gupta, Surendra M.; Cullinane, Thomas P.
2004-02-01
The cost-benefit analysis of data associated with re-processing of used products often involves the uncertainty feature of cash-flow modeling. The data is not objective because of uncertainties in supply, quality and disassembly times of used products. Hence, decision-makers must rely on "fuzzy" data for analysis. The same parties that are involved in the forward supply chain often carry out the collection and re-processing of used products. It is therefore important that the cost-benefit analysis takes the data of both new products and used products into account. In this paper, a fuzzy cost-benefit function is proposed that is used to perform a multi-criteria economic analysis to select the most economical products to process in a closed-loop supply chain. Application of the function is detailed through an illustrative example.
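One common way to operate on such "fuzzy" data is with triangular fuzzy numbers; the sketch below, with invented cost and benefit figures and centroid defuzzification as an assumed ranking rule, is a generic illustration rather than the authors' specific cost-benefit function:

```python
# A triangular fuzzy number is represented as (low, most_likely, high).
def tfn_sub(a, b):
    # Fuzzy subtraction reverses the bounds of the subtrahend.
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

def centroid(a):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(a) / 3.0

# Hypothetical per-unit figures for two candidate used products (currency units):
# revenue from re-processing and cost of collection plus disassembly.
products = {
    "product_A": {"benefit": (8.0, 10.0, 13.0), "cost": (4.0, 5.0, 7.0)},
    "product_B": {"benefit": (6.0, 8.0, 10.0), "cost": (2.0, 3.0, 5.0)},
}

scores = {}
for name, p in products.items():
    net = tfn_sub(p["benefit"], p["cost"])     # fuzzy net benefit
    scores[name] = centroid(net)
    print(name, "fuzzy net benefit =", net, "centroid =", round(scores[name], 2))

print("preferred:", max(scores, key=scores.get))
```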
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
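The general recipe (fit several candidate densities by maximum likelihood and compare them with an information criterion) can be sketched with scipy; the depth data here are simulated placeholders rather than the Klamath River observations, and the original article supplies its own R code:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
depth = rng.gamma(shape=4.0, scale=0.25, size=300)   # fake "depth used" data (m)

candidates = {
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
    "weibull": stats.weibull_min,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(depth, floc=0)                 # maximum likelihood, location fixed at 0
    loglik = np.sum(dist.logpdf(depth, *params))
    k = len(params) - 1                              # free parameters (loc was fixed)
    results[name] = 2 * k - 2 * loglik               # AIC: lower is better
    print(f"{name:8s} AIC = {results[name]:8.2f}")

best = min(results, key=results.get)
print("selected HSC curve form:", best)
```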
Trends in hydrological extremes in the Senegal and the Niger Rivers
NASA Astrophysics Data System (ADS)
Wilcox, C.; Bodian, A.; Vischel, T.; Panthou, G.; Quantin, G.
2017-12-01
In recent years, West Africa has witnessed several floods of unprecedented magnitude. Although the evolution of hydrological extremes has been evaluated in the region to some extent, results lack regional coverage, significance levels, uncertainty estimations, model selection criteria, or a combination of the above. In this study, Generalized Extreme Value (GEV) distributions with and without various non-stationary temporal covariates are applied to annual maxima of daily discharge (AMAX) data sets in the Sudano-Guinean part of the Senegal River basin and in the Sahelian part of the Niger River basin. The data range from the 1950s to the 2010s. The two best-fitting models most often selected (at the alpha=0.05 significance level) were 1) a double-linear model for the central tendency parameter (μ) with stationary dispersion (σ) and 2) a double-linear model for both parameters. Change points are relatively consistent for the Senegal basin, with stations switching from a decreasing streamflow trend to an increasing streamflow trend in the early 1980s. In the Niger basin, the trend in μ was generally positive with an increase in slope after the change point, but the change point location was less consistent. The study clearly demonstrates the significant trends in extreme discharge values in West Africa over the past six decades. Moreover, it proposes a clear methodology for comparing GEV models and selecting the best one for use. The return levels generated from the chosen models can be applied to river basin management and the sizing of hydraulic works. The results provide a first evaluation of non-stationarity in extreme hydrological values in West Africa that is accompanied by significance levels, uncertainties, and non-stationary return level estimations.
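A minimal version of the model comparison (a stationary GEV versus a GEV whose location parameter varies linearly in time, fitted by maximum likelihood and compared with AIC on simulated annual maxima; the published study uses double-linear covariates and formal uncertainty intervals) could look like this:

```python
import numpy as np
from scipy import optimize, stats

years = np.arange(1955, 2015)
t = (years - years.mean()) / 10.0                       # centred time, in decades
# Simulated AMAX series with an upward trend in the GEV location parameter.
amax = stats.genextreme.rvs(c=-0.1, loc=500 + 40 * t, scale=80,
                            size=len(years), random_state=42)

def neg_loglik(params, trend):
    if trend:
        mu0, mu1, log_sigma, xi = params
        mu = mu0 + mu1 * t
    else:
        mu0, log_sigma, xi = params
        mu = mu0
    # scipy's shape parameter c equals -xi in the usual GEV convention
    ll = stats.genextreme.logpdf(amax, c=-xi, loc=mu, scale=np.exp(log_sigma))
    return 1e10 if not np.all(np.isfinite(ll)) else -ll.sum()

aic = {}
for trend, x0 in [(False, [500.0, np.log(80.0), 0.1]),
                  (True, [500.0, 0.0, np.log(80.0), 0.1])]:
    res = optimize.minimize(neg_loglik, x0, args=(trend,), method="Nelder-Mead")
    aic[trend] = 2 * len(x0) + 2 * res.fun
    print(f"linear trend in mu: {trend},  AIC = {aic[trend]:.1f}")

print("non-stationary model preferred:", aic[True] < aic[False])
```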
A model-averaging method for assessing groundwater conceptual model uncertainty.
Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M
2010-01-01
This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.
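The averaging step itself reduces to a weighted combination of per-model predictive moments; in this sketch the model weights and the per-model Monte Carlo means and variances are invented placeholders standing in for the 25 DVRFS models:

```python
import numpy as np

rng = np.random.default_rng(11)
n_models = 25

# Placeholder posterior model probabilities (e.g., from information criteria or GLUE).
weights = rng.dirichlet(np.ones(n_models))

# Placeholder per-model predictions of hydraulic head at one location:
# mean and variance from each model's Monte Carlo parameter sampling.
means = rng.normal(730.0, 2.0, n_models)          # m
within_var = rng.uniform(0.2, 0.8, n_models)      # parametric uncertainty per model

# Bayesian model averaging moments:
bma_mean = np.sum(weights * means)
within = np.sum(weights * within_var)                     # parametric component
between = np.sum(weights * (means - bma_mean) ** 2)       # model-choice component
total_var = within + between

print(f"BMA mean head            : {bma_mean:.2f} m")
print(f"within-model variance    : {within:.3f}")
print(f"between-model variance   : {between:.3f}")
print(f"share due to model choice: {between / total_var:.1%}")
```

The between-model term corresponds to what the abstract identifies as the dominant contribution to predictive uncertainty, and it is only visible because several plausible models are carried forward rather than a single "best" one.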
NASA Astrophysics Data System (ADS)
Wu, Mousong; Sholze, Marko
2017-04-01
We investigated the importance of soil moisture data for the assimilation of a terrestrial biosphere model (BETHY) over a long time period, from 2010 to 2015. In total, 101 parameters related to carbon turnover, soil respiration, and soil texture were selected for optimization within a carbon cycle data assimilation system (CCDAS). Soil moisture data from the Soil Moisture and Ocean Salinity (SMOS) product were derived for 10 sites representing different plant functional types (PFTs) as well as different climate zones. The uncertainty of the SMOS soil moisture data was also estimated using the triple collocation analysis (TCA) method, by comparison with the ASCAT dataset and BETHY forward simulation results. Assimilating soil moisture into the system improved soil moisture as well as net primary productivity (NPP) and net ecosystem productivity (NEP) when compared with soil moisture derived from in-situ measurements and FLUXNET datasets. Parameter uncertainties were largely reduced relative to prior values. Using SMOS soil moisture data for the assimilation of a terrestrial biosphere model proved to be an efficient approach to reducing uncertainty in simulated ecosystem fluxes. It could be further used in regional and global assimilation work to constrain simulated carbon dioxide concentrations in combination with other sources of measurements.
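The triple collocation step can be sketched with the classical covariance-based estimator; the three series below are synthetic stand-ins for SMOS, ASCAT and the BETHY forward run, and no rescaling or bias-correction step is included:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

truth = 0.25 + 0.08 * np.sin(np.linspace(0, 20, n))      # "true" soil moisture
x = truth + rng.normal(0, 0.03, n)                        # e.g., SMOS-like
y = truth + rng.normal(0, 0.05, n)                        # e.g., ASCAT-like
z = truth + rng.normal(0, 0.02, n)                        # e.g., model forward run

def tc_error_std(a, b, c):
    """Covariance-based triple collocation: error std dev of dataset `a`,
    assuming independent zero-mean errors and a common (unscaled) truth."""
    a, b, c = a - a.mean(), b - b.mean(), c - c.mean()
    err_var = np.mean(a * a) - np.mean(a * b) * np.mean(a * c) / np.mean(b * c)
    return np.sqrt(max(err_var, 0.0))

for name, series in [("SMOS-like", x), ("ASCAT-like", y), ("model", z)]:
    others = [s for s in (x, y, z) if s is not series]
    print(f"{name:10s} estimated error std ~ {tc_error_std(series, *others):.3f}")
```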
On uncertainty quantification in hydrogeology and hydrogeophysics
NASA Astrophysics Data System (ADS)
Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud
2017-12-01
Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Although great progress has been made over the years, many studies decompose the uncertainties of a geological model and analyse them separately, item by item, ignoring the combined impact of multi-source uncertainties. To evaluate this combined uncertainty, we use probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrated data errors, spatial randomness, and cognitive information into a posterior distribution that evaluates the overall uncertainty of the geological model. Uncertainties propagate and accumulate during the modeling process, and the gradual integration of multi-source uncertainty can be viewed as a simulation of this propagation; Bayesian inference accomplishes the uncertainty updating. The maximum entropy principle is effective for estimating the prior probability distribution, as it ensures that the prior satisfies the constraints supplied by the given information with minimum prejudice. In the end, we obtained a posterior distribution that evaluates the overall uncertainty of the geological model; it represents the combined impact of all uncertain factors on the model's spatial structure. The framework thus provides a solution for evaluating the combined impact of multi-source uncertainties on a geological model and an approach for studying the uncertainty propagation mechanism in geological modeling.
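A minimal sketch of the updating idea for a single uncertain geological interface depth, assuming a maximum-entropy (uniform) prior on a bounded support and one noisy observation; all numbers and the Gaussian error model are illustrative assumptions.

import numpy as np

# With no prior information beyond the admissible range, the maximum-entropy
# prior on a bounded support is uniform.
depths = np.linspace(90.0, 110.0, 201)        # candidate interface depths [m]
prior = np.full(depths.size, 1.0 / depths.size)

obs, obs_sigma = 101.3, 1.5                    # noisy borehole pick and its error [m] (assumed)
likelihood = np.exp(-0.5 * ((depths - obs) / obs_sigma) ** 2)

posterior = prior * likelihood
posterior /= posterior.sum()                   # integrated (overall) uncertainty

mean = np.sum(depths * posterior)
std = np.sqrt(np.sum((depths - mean) ** 2 * posterior))
print(mean, std)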
Alderman, Phillip D.; Stanfill, Bryan
2016-10-06
Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
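A minimal random-walk Metropolis sketch for a single assumed phenology parameter; the synthetic data, flat prior, and Gaussian likelihood are assumptions for illustration, not the authors' nine-model, multi-genotype setup.

import numpy as np

rng = np.random.default_rng(1)
obs = np.array([58.0, 61.0, 57.5, 60.0])       # assumed days-to-heading observations

def log_post(theta):
    # Flat prior on a plausible range plus a Gaussian likelihood with sigma = 2 days.
    if not 40.0 < theta < 80.0:
        return -np.inf
    return -0.5 * np.sum((obs - theta) ** 2) / 2.0 ** 2

theta, chain = 55.0, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(0, 1.0)          # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance step
        theta, lp = prop, lp_prop
    chain.append(theta)

# Posterior summary after discarding burn-in
print(np.mean(chain[5000:]), np.percentile(chain[5000:], [2.5, 97.5]))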
The burden of typhoid fever in low- and middle-income countries: A meta-regression approach.
Antillón, Marina; Warren, Joshua L; Crawford, Forrest W; Weinberger, Daniel M; Kürüm, Esra; Pak, Gi Deok; Marks, Florian; Pitzer, Virginia E
2017-02-01
Upcoming vaccination efforts against typhoid fever require an assessment of the baseline burden of disease in countries at risk. There are no typhoid incidence data from most low- and middle-income countries (LMICs), so model-based estimates offer insights for decision-makers in the absence of readily available data. We developed a mixed-effects model fit to data from 32 population-based studies of typhoid incidence in 22 locations in 14 countries. We tested the contribution of economic and environmental indices for predicting typhoid incidence using a stochastic search variable selection algorithm. We performed out-of-sample validation to assess the predictive performance of the model. We estimated that 17.8 million cases of typhoid fever occur each year in LMICs (95% credible interval: 6.9-48.4 million). Central Africa was predicted to experience the highest incidence of typhoid, followed by select countries in Central, South, and Southeast Asia. Incidence typically peaked in the 2-4 year old age group. Models incorporating widely available economic and environmental indicators were found to describe incidence better than null models. Recent estimates of typhoid burden may under-estimate the number of cases and magnitude of uncertainty in typhoid incidence. Our analysis permits prediction of overall as well as age-specific incidence of typhoid fever in LMICs, and incorporates uncertainty around the model structure and estimates of the predictors. Future studies are needed to further validate and refine model predictions and better understand year-to-year variation in cases.
Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan
2015-05-01
The lack of observation data for calibration constrains applications of hydrological models to estimate daily time series of streamflow. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making it possible to track streamflow from space. In this study, a method for calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulations with satellite observations. The uncertainty band of the streamflow simulation spans most of the 10-year average monthly observed streamflow for moderate and high flow conditions. The Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for real-world applications are addressed to promote future use of the proposed method in more ungauged basins.
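A minimal GLUE-style sketch with a stand-in model and an assumed behavioral threshold; everything below is illustrative and is not the study's hydrological model, likelihood choice, or data.

import numpy as np

rng = np.random.default_rng(2)
obs_width = 120 + 30 * np.sin(np.linspace(0, 6, 100))   # assumed satellite-derived widths

def toy_model(a, b):
    """Stand-in for the rainfall-runoff plus width model."""
    return a + b * np.sin(np.linspace(0, 6, 100))

# Sample parameter sets, score each simulation with a Nash-Sutcliffe-type efficiency,
# keep "behavioral" sets, and derive prediction quantiles from them.
params = rng.uniform([80, 10], [160, 60], size=(5000, 2))
sims = np.array([toy_model(a, b) for a, b in params])
nse = 1 - np.sum((sims - obs_width) ** 2, axis=1) / np.sum((obs_width - obs_width.mean()) ** 2)

behavioral = sims[nse > 0.5]                             # assumed behavioral threshold
lower, median, upper = np.percentile(behavioral, [5, 50, 95], axis=0)
print(behavioral.shape[0], median[:5])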
Robust H∞ control of active vehicle suspension under non-stationary running
NASA Astrophysics Data System (ADS)
Guo, Li-Xin; Zhang, Li-Ping
2012-12-01
Due to the complexity of the controlled objects, the selection of control strategies and algorithms is an important task in vehicle control system design. Moreover, the control of automobile active suspensions has become an important research problem because of the hard constraints and parameter uncertainty inherent in their mathematical models. In this study, after establishing a non-stationary road surface excitation model, the active suspension control problem under non-stationary running conditions was studied using robust H∞ control and linear matrix inequality optimization. The dynamic equations of a two-degree-of-freedom quarter-car model with parameter uncertainty were derived. An H∞ state feedback control strategy with time-domain hard constraints was proposed and then used to design the active suspension control system for the quarter-car model. Time-domain analysis and parameter robustness analysis were carried out to evaluate the stability of the proposed controller. Simulation results show that the proposed control strategy maintains high system stability under non-stationary running conditions and parameter uncertainty (including suspension mass, suspension stiffness and tire stiffness). The proposed control strategy achieves a promising improvement in ride comfort and satisfies the requirements on dynamic suspension deflection, dynamic tire loads and control forces within the given constraints under the non-stationary running condition.
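For reference, a generic two-degree-of-freedom quarter-car state-space sketch; the parameter values and state ordering are illustrative assumptions, not the paper's model or its H∞/LMI controller.

import numpy as np

ms, mu = 320.0, 40.0                       # sprung / unsprung mass [kg] (assumed)
ks, kt, cs = 18000.0, 200000.0, 1000.0     # suspension stiffness, tire stiffness, damping (assumed)

# States: [suspension deflection, sprung-mass velocity, tire deflection, unsprung-mass velocity]
A = np.array([
    [0.0,      1.0,      0.0,      -1.0],
    [-ks/ms,  -cs/ms,    0.0,       cs/ms],
    [0.0,      0.0,      0.0,       1.0],
    [ks/mu,    cs/mu,   -kt/mu,    -cs/mu],
])
B = np.array([[0.0], [1.0/ms], [0.0], [-1.0/mu]])   # active control force input
E = np.array([[0.0], [0.0], [-1.0], [0.0]])         # road velocity disturbance input
print(np.linalg.eigvals(A))                         # open-loop modes of the passive suspension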
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cote, Benoit; Ritter, Christian; O'Shea, Brian W.
Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.
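A minimal sketch of the Monte Carlo propagation idea; the toy one-zone relation, the parameter distributions, and the output quantity are assumptions for illustration, not the authors' chemical-evolution code.

import numpy as np

rng = np.random.default_rng(3)
n = 1000
imf_slope = rng.normal(-2.35, 0.2, n)          # high-mass IMF slope (assumed distribution)
n_snia = rng.normal(1.5e-3, 0.5e-3, n)         # SNe Ia per solar mass formed (assumed distribution)

def toy_abundance(slope, nia):
    """Stand-in for a one-zone model output, e.g. an abundance ratio at a fixed age."""
    return -1.0 + 0.4 * (slope + 2.35) + 200.0 * nia

# Draw inputs from their distributions, evaluate the model, report confidence bands.
samples = toy_abundance(imf_slope, n_snia)
for level in (68, 95):
    lo, hi = np.percentile(samples, [50 - level / 2, 50 + level / 2])
    print(f"{level}% band: {lo:.2f} to {hi:.2f} dex")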
Model structures amplify uncertainty in predicted soil carbon responses to climate change.
Shi, Zheng; Crowell, Sean; Luo, Yiqi; Moore, Berrien
2018-06-04
Large model uncertainty in projected future soil carbon (C) dynamics has been well documented. However, our understanding of the sources of this uncertainty is limited. Here we quantify the uncertainties arising from model parameters, structures and their interactions, and how those uncertainties propagate through different models to projections of future soil carbon stocks. Both the vertically resolved model and the microbially explicit model project much greater uncertainty in response to climate change than the conventional soil C model, with both positive and negative C-climate feedbacks, whereas the conventional model consistently predicts a positive soil C-climate feedback. Our findings suggest that diverse model structures are necessary to increase confidence in soil C projections. However, the larger uncertainty in the complex models also suggests that we need to strike a balance between model complexity and the need to include diverse model structures in order to forecast soil C dynamics with high confidence and low uncertainty.
NASA Astrophysics Data System (ADS)
Nilfouroushan, F.; Pysklywec, R.; Cruden, S.
2009-05-01
Cohesionless or very low cohesion granular materials are widely used in analogue/physical models to simulate brittle rocks in the upper crust. Selection of materials with appropriate cohesion values in such models is important for the simulation of the dynamics of brittle rock deformation in nature. Uncertainties in the magnitude of cohesion (due to measurement errors, extrapolations at low normal stresses, or model setup) in laboratory experiments can possibly result in misinterpretation of the styles and mechanisms of deformation in natural fold-and-thrust belts. We ran a series of 2-D numerical models to investigate systematically the effect of cohesion uncertainties on the evolution of models of fold-and-thrust belts. The analyses employ SOPALE, a geodynamic code based on the arbitrary Lagrangian-Eulerian (ALE) finite element method. Similar to analogue models, the material properties of sand and transparent silicone (PDMS) are used to simulate brittle and viscous behaviors of upper crustal rocks. The suite of scaled brittle and brittle-viscous numerical experiments has the same initial geometry, but the cohesion value of the brittle layers is increased systematically from 0 to 100 Pa. The stress and strain distribution in different sets of models with different cohesion values are compared and analyzed. The kinematics and geometry of thrust wedges, including the location and number of foreland- and hinterland-verging thrust faults, pop-up structures, tapers and topography, are also explored and their sensitivity to cohesion value is discussed.
Model parameter uncertainty analysis for an annual field-scale P loss model
NASA Astrophysics Data System (ADS)
Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie
2016-08-01
Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations of a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.
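A minimal sketch of computing confidence and prediction intervals for a single regression equation; the synthetic data and variable names are assumptions, and the APLE equations themselves are not reproduced here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.uniform(0, 50, 30)                     # e.g. a soil-test P proxy (assumed units)
y = 0.08 * x + rng.normal(0, 0.4, x.size)      # e.g. a P-loss proxy (assumed relation)

n = x.size
b1, b0 = np.polyfit(x, y, 1)                   # slope, intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))      # residual standard error
sxx = np.sum((x - x.mean()) ** 2)

x0 = 30.0
t = stats.t.ppf(0.975, n - 2)
se_mean = s * np.sqrt(1 / n + (x0 - x.mean()) ** 2 / sxx)        # confidence interval width term
se_pred = s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)    # prediction interval width term
yhat = b0 + b1 * x0
print(yhat, (yhat - t * se_mean, yhat + t * se_mean), (yhat - t * se_pred, yhat + t * se_pred))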
NASA Astrophysics Data System (ADS)
Vallam, P.; Qin, X. S.
2017-10-01
Anthropogenically driven climate change would affect the global ecosystem and is becoming a worldwide concern. Numerous studies have been undertaken to determine the future trends of meteorological variables at different scales. Despite these studies, there remains significant uncertainty in the prediction of future climates. To examine the uncertainty arising from the choice of downscaling scheme, projections of meteorological variables for future horizons were compared across several statistical downscaling schemes. These schemes included the statistical downscaling model (SDSM), the change-factor method incorporated with LARS-WG, and a bias-corrected disaggregation (BCD) method. Global circulation models (GCMs) based on CMIP3 (HadCM3) and CMIP5 (CanESM2) were utilized to perturb the future climate. Five study sites (i.e., Alice Springs, Edmonton, Frankfurt, Miami, and Singapore) with diverse climatic conditions were chosen to examine the spatial variability in the results of applying the various statistical downscaling schemes. The results indicated that regions experiencing heavy precipitation intensities were most likely to show divergence between the predictions of the various statistical downscaling methods. Also, the variance computed in projecting the weather extremes indicated the uncertainty derived from the selection of downscaling tools and climate models. This study could help improve understanding of the features of different downscaling approaches and the overall downscaling uncertainty.
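A minimal sketch of the change-factor (delta) idea, one of the compared families of schemes; the station observations and GCM values below are assumed for illustration.

import numpy as np

obs_temp = np.array([26.5, 27.1, 28.0, 27.4])      # observed monthly mean temperature [°C] (assumed)
gcm_temp_hist = np.array([25.0, 25.8, 26.9, 26.1])  # GCM, baseline period
gcm_temp_fut = np.array([26.2, 27.3, 28.5, 27.6])   # GCM, future horizon

obs_pr = np.array([180.0, 150.0, 90.0, 60.0])        # observed monthly precipitation [mm] (assumed)
gcm_pr_hist = np.array([160.0, 140.0, 100.0, 70.0])
gcm_pr_fut = np.array([175.0, 150.0, 85.0, 55.0])

temp_future = obs_temp + (gcm_temp_fut - gcm_temp_hist)   # additive change factor for temperature
pr_future = obs_pr * (gcm_pr_fut / gcm_pr_hist)           # multiplicative change factor for precipitation
print(temp_future, pr_future)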
Model Selection and Accounting for Model Uncertainty in Graphical Models Using OCCAM’s Window
1991-07-22
Excerpt (risk factors for coronary heart disease in the example data): A, smoking; B, strenuous mental work; C, strenuous physical work; D, systolic blood pressure; E, ratio of beta and alpha lipoproteins; F, family anamnesis of coronary heart disease. The selected models (shown in Figure 4 and Table 1 of the report) include a link from smoking (A) to systolic blood pressure (D), and there is decisive evidence in favour of the marginal independence of family anamnesis.
NASA Astrophysics Data System (ADS)
Basu, Sourish; Baker, David F.; Chevallier, Frédéric; Patra, Prabir K.; Liu, Junjie; Miller, John B.
2018-05-01
We estimate the uncertainty of CO2 flux estimates in atmospheric inversions stemming from differences between different global transport models. Using a set of observing system simulation experiments (OSSEs), we estimate this uncertainty as represented by the spread between five different state-of-the-art global transport models (ACTM, LMDZ, GEOS-Chem, PCTM and TM5), for both traditional in situ CO2 inversions and inversions of XCO2 estimates from the Orbiting Carbon Observatory 2 (OCO-2). We find that, in the absence of relative biases between in situ CO2 and OCO-2 XCO2, OCO-2 estimates of terrestrial flux for TRANSCOM-scale land regions can be more robust to transport model differences than corresponding in situ CO2 inversions. This is due to a combination of the increased spatial coverage of OCO-2 samples and the total column nature of OCO-2 estimates. We separate the two effects by constructing hypothetical in situ networks with the coverage of OCO-2 but with only near-surface samples. We also find that the transport-driven uncertainty in fluxes is comparable between well-sampled northern temperate regions and poorly sampled tropical regions. Furthermore, we find that spatiotemporal differences in sampling, such as between OCO-2 land and ocean soundings, coupled with imperfect transport, can produce differences in flux estimates that are larger than flux uncertainties due to transport model differences. This highlights the need for sampling with as complete a spatial and temporal coverage as possible (e.g., using both land and ocean retrievals together for OCO-2) to minimize the impact of selective sampling. Finally, our annual and monthly estimates of transport-driven uncertainties can be used to evaluate the robustness of conclusions drawn from real OCO-2 and in situ CO2 inversions.
NASA Astrophysics Data System (ADS)
He, Xin; Koch, Julian; Sonnenborg, Torben O.; Jørgensen, Flemming; Schamper, Cyril; Christian Refsgaard, Jens
2014-04-01
Geological heterogeneity is a very important factor to consider when developing geological models for hydrological purposes. Using statistically based stochastic geological simulations, the spatial heterogeneity in such models can be accounted for. However, various types of uncertainties are associated with both the geostatistical method and the observation data. In the present study, TProGS is used as the geostatistical modeling tool to simulate structural heterogeneity for glacial deposits in a headwater catchment in Denmark. The focus is on how the observation data uncertainty can be incorporated in the stochastic simulation process. The study uses two types of observation data: borehole data and airborne geophysical data. It is commonly acknowledged that the density of the borehole data is usually too sparse to characterize the horizontal heterogeneity. The use of geophysical data gives an unprecedented opportunity to obtain high-resolution information and thus to identify geostatistical properties more accurately, especially in the horizontal direction. However, since such data are not a direct measurement of the lithology, larger uncertainty in point estimates can be expected compared to the use of borehole data. We have proposed a histogram probability matching method in order to link the information on resistivity to hydrofacies, while considering the data uncertainty at the same time. Transition probabilities and Markov chain models are established using the transformed geophysical data. It is shown that such transformation is in fact practical; however, the cutoff value for dividing the resistivity data into facies is difficult to determine. The simulated geological realizations indicate significant differences in spatial structure depending on the type of conditioning data selected. To our knowledge, this is the first time that grid-to-grid airborne geophysical data, including their uncertainty, have been used in conditional geostatistical simulations in TProGS. Therefore, it provides valuable insights regarding the advantages and challenges of using such comprehensive data.
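A minimal sketch of estimating vertical transition probabilities from a categorical facies log; the facies sequence and categories are synthetic, and this is not the TProGS implementation or the study's data.

import numpy as np

# Synthetic facies log: 0 = sand, 1 = silt, 2 = clay (assumed categories).
facies = np.array([0, 0, 0, 1, 1, 0, 2, 2, 2, 1, 0, 0, 1, 2, 2, 0])
n_facies = 3

# Count transitions between successive samples and row-normalise into a
# Markov transition probability matrix.
counts = np.zeros((n_facies, n_facies))
for a, b in zip(facies[:-1], facies[1:]):
    counts[a, b] += 1
trans_prob = counts / counts.sum(axis=1, keepdims=True)
print(np.round(trans_prob, 2))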
Szekér, Szabolcs; Vathy-Fogarassy, Ágnes
2018-01-01
Logistic regression based propensity score matching is a widely used method in case-control studies for selecting the individuals of the control group. This method creates a suitable control group if all factors affecting the output variable are known. However, if relevant latent variables exist as well, which are not taken into account during the calculations, the quality of the control group is uncertain. In this paper, we present a statistics-based study in which we try to determine the relationship between the accuracy of the logistic regression model and the uncertainty of the dependent variable of the control group defined by propensity score matching. Our analyses show that there is a linear correlation between the fit of the logistic regression model and the uncertainty of the output variable. In certain cases, a latent binary explanatory variable can result in a relative error of up to 70% in the prediction of the outcome variable. The observed phenomenon calls the attention of analysts to an important point, which must be taken into account when drawing conclusions.
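A minimal sketch of logistic-regression propensity score matching with greedy 1:1 nearest-neighbour matching; the synthetic covariates, sample sizes, and the matching rule are assumptions for illustration, not the paper's study design.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
X = rng.normal(size=(n, 3))                                  # observed covariates
treated = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Propensity score = predicted probability of treatment given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

controls = np.where(treated == 0)[0]
matches = []
for i in np.where(treated == 1)[0]:
    j = controls[np.argmin(np.abs(ps[controls] - ps[i]))]    # greedy 1:1 match, with replacement
    matches.append((i, j))
print(len(matches), matches[:5])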
Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network
NASA Astrophysics Data System (ADS)
Khan, U. T.
2016-12-01
Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high-resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data-driven models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility-theory-based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. The overall effect of these training components is to replace the traditional ad hoc network configuration methods with one based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network. The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method had higher performance (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.
Slavinskaya, N. A.; Abbasi, M.; Starcke, J. H.; ...
2017-01-24
An automated data-centric infrastructure, Process Informatics Model (PrIMe), was applied to validation and optimization of a syngas combustion model. The Bound-to-Bound Data Collaboration (B2BDC) module of PrIMe was employed to discover the limits of parameter modifications based on uncertainty quantification (UQ) and consistency analysis of the model–data system and experimental data, including shock-tube ignition delay times and laminar flame speeds. Existing syngas reaction models are reviewed, and the selected kinetic data are described in detail. Empirical rules were developed and applied to evaluate the uncertainty bounds of the literature experimental data. Here, the initial H2/CO reaction model, assembled from 73 reactions and 17 species, was subjected to a B2BDC analysis. For this purpose, a dataset was constructed that included a total of 167 experimental targets and 55 active model parameters. Consistency analysis of the composed dataset revealed disagreement between models and data. Further analysis suggested that removing 45 experimental targets, 8 of which were self-inconsistent, would lead to a consistent dataset. This dataset was subjected to a correlation analysis, which highlights possible directions for parameter modification and model improvement. Additionally, several methods of parameter optimization were applied, some of them unique to the B2BDC framework. The optimized models demonstrated improved agreement with experiments compared to the initially assembled model, and their predictions for experiments not included in the initial dataset (i.e., a blind prediction) were investigated. The results demonstrate benefits of applying the B2BDC methodology for developing predictive kinetic models.
NASA Astrophysics Data System (ADS)
Vidal, Jean-Philippe; Hingray, Benoît
2014-05-01
In order to better understand the uncertainties in the climate of the next decades, an increasingly large number of increasingly diverse climate projections is being produced by the climate research community through coordinated initiatives (e.g., CMIP5, CORDEX), but also through more specific experiments at both the global scale (perturbed parameter ensembles) and the regional-to-local scale (empirical statistical downscaling ensembles). While significant efforts are put into making such projections available online, very few works focus on how to make such an enormous amount of information actually usable by the impact and adaptation community. Climate services should therefore include guidelines and recommendations for identifying subsets of climate projections that would have (1) a size manageable by downstream modelling approaches and (2) the relevant properties for informing adaptation strategies. This work proposes a generic framework for identifying tailored subsets of climate projections that would meet both the objectives and the constraints of a specific impact / adaptation study in a typical top-down approach. This decision framework builds on two main preliminary tasks that lead to critical choices in the selection strategy: (1) understanding the requirements of the specific impact / adaptation study, and (2) characterizing the (downscaled) climate projections dataset available. An impact / adaptation study has two types of requirements. First, the study may aim at various outcomes for a given climate-related feature: the best estimate of the future, the range of possible futures, a set of representative futures, or a statistically interpretable ensemble of futures. Second, impact models may come with specific constraints on climate input variables, like spatio-temporal and between-variables coherence. Additionally, when concurrent impact models are used, the most restrictive constraints have to be considered in order to be able to assess the uncertainty associated with this modelling step. Besides, the climate projection dataset available for a given study has several characteristics that will heavily condition the type of conclusions that can be reached. Indeed, the dataset at hand may or may not sample different types of uncertainty (socio-economic, structural, parametric, along with internal variability). Moreover, these types are present at different steps in the well-known cascade of uncertainty, from the emission / concentration scenarios and the global climate to the regional-to-local climate. Critical choices for the selection are therefore conditioned on all features above. The type of selection (picking out, culling, or statistical sampling) is closely related to the study objectives and the uncertainty types present in the dataset. Moreover, grounds for picking out or culling projections may stem from global, regional or feature-specific present-day performance, representativeness, or covered range. An example use of this framework is a hierarchical selection for 3 classes of impact models among 3000 transient climate projections from different runs of 4 GCMs, statistically downscaled by 3 probabilistic methods, and made available for an integrated water resource adaptation study in the Durance catchment (southern French Alps). This work is part of the GICC R2D2-2050 project (Risk, water Resources and sustainable Development of the Durance catchment in 2050) and the EU FP7 COMPLEX project (Knowledge Based Climate Mitigation Systems for a Low Carbon Economy).
NASA Astrophysics Data System (ADS)
Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.
2017-04-01
Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty introduced by roughness coefficient values in hydraulic models for flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte Carlo simulations of hydraulic roughness. Terrestrial Laser Scanner data have been used to produce a high-quality DEM, minimising input data uncertainty and improving the accuracy of the stream channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated for their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the use of the Median Absolute Percentage Error evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
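A minimal sketch of Latin Hypercube Sampling of Manning's n from a fitted distribution; the lognormal parameters are assumed for illustration, not the study's fitted values.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_samples = 200

# Stratify the unit interval, pick one uniform point per stratum, then shuffle
# to remove ordering; this is the 1-D Latin Hypercube construction.
u = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
rng.shuffle(u)

# Inverse-CDF transform into an assumed lognormal distribution of Manning's n.
manning_n = stats.lognorm.ppf(u, s=0.25, scale=0.035)
print(manning_n.min(), np.median(manning_n), manning_n.max())
# Each sample would parameterise one hydraulic-model run; wet/dry outcomes across
# runs are then aggregated into inundation probability maps.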
NASA Technical Reports Server (NTRS)
Shirai, T.; Ishizawa, M.; Zhuravlev, R.; Ganshin, A.; Belikov, D.; Saito, M.; Oda, T.; Valsala, V.; Gomez-Pelaez, A. J.; Langenfelds, R.;
2017-01-01
We present an assimilation system for atmospheric carbon dioxide (CO2) using a Global Eulerian-Lagrangian Coupled Atmospheric model (GELCA), and demonstrate its capability to capture the observed atmospheric CO2 mixing ratios and to estimate CO2 fluxes. With the efficient data handling scheme in GELCA, our system assimilates non-smoothed CO2 data from observational data products such as the Observation Package (ObsPack) data products as constraints on surface fluxes. We conducted sensitivity tests to examine the impact of the site selections and the prior uncertainty settings of the observations on the inversion results. For these sensitivity tests, we made five different site-data selections from the ObsPack product. In all cases, the time series of the global net CO2 flux to the atmosphere stayed close to values calculated from the growth rate of the observed global mean atmospheric CO2 mixing ratio. At regional scales, estimated seasonal CO2 fluxes were altered, depending on the CO2 data selected for assimilation. Uncertainty reductions (URs) were determined at the regional scale and compared among cases. As measures of the model-data mismatch, we used the model-data bias, root-mean-square error, and the linear correlation. For most observation sites, the model-data mismatch was reasonably small. Regarding regional flux estimates, tropical Asia was one of the regions that showed a significant impact from the observation network settings. We found that the surface fluxes in tropical Asia were the most sensitive to the use of aircraft measurements over the Pacific, and the seasonal cycle agreed better with the results of bottom-up studies when the aircraft measurements were assimilated. These results confirm the importance of these aircraft observations, especially for constraining surface fluxes in the tropics.
Huijbregts, Mark A J; Gilijamse, Wim; Ragas, Ad M J; Reijnders, Lucas
2003-06-01
The evaluation of uncertainty is relatively new in environmental life-cycle assessment (LCA). It provides useful information to assess the reliability of LCA-based decisions and to guide future research toward reducing uncertainty. Most uncertainty studies in LCA quantify only one type of uncertainty, i.e., uncertainty due to input data (parameter uncertainty). However, LCA outcomes can also be uncertain due to normative choices (scenario uncertainty) and the mathematical models involved (model uncertainty). The present paper outlines a new methodology that quantifies parameter, scenario, and model uncertainty simultaneously in environmental life-cycle assessment. The procedure is illustrated in a case study that compares two insulation options for a Dutch one-family dwelling. Parameter uncertainty was quantified by means of Monte Carlo simulation. Scenario and model uncertainty were quantified by resampling different decision scenarios and model formulations, respectively. Although scenario and model uncertainty were not quantified comprehensively, the results indicate that both types of uncertainty influence the case study outcomes. This stresses the importance of quantifying parameter, scenario, and model uncertainty simultaneously. The two insulation options studied were found to have significantly different impact scores for global warming, stratospheric ozone depletion, and eutrophication. The thickest insulation option has the lowest impact on global warming and eutrophication, and the highest impact on stratospheric ozone depletion.
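A minimal sketch of sampling the three uncertainty types together; the toy impact function, the distributions, and the scenario values are assumptions for illustration, not the study's LCA inventory or characterisation factors.

import numpy as np

rng = np.random.default_rng(6)

def impact(insulation_kg, factor, allocation):
    """Stand-in characterisation: impact = mass * factor, scaled by an allocation choice."""
    return insulation_kg * factor * allocation

# A hypothetical alternative model formulation, to represent model uncertainty.
def impact_alt(insulation_kg, factor, allocation):
    return (insulation_kg * factor) ** 0.98 * allocation

models = [impact, impact_alt]
scores = []
for _ in range(10000):
    factor = rng.lognormal(mean=np.log(2.5), sigma=0.2)   # parameter uncertainty
    allocation = rng.choice([0.9, 1.0, 1.1])              # scenario (normative choice) uncertainty
    model_variant = models[rng.integers(len(models))]     # model (formulation) uncertainty
    scores.append(model_variant(120.0, factor, allocation))
print(np.percentile(scores, [2.5, 50, 97.5]))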
Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event
Strydom, Gerhard
2013-01-01
The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
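A short sketch of the order-statistics (Wilks-type) reasoning behind one-sided 95%/95% tolerance statements; this is the generic argument, not the SUSA implementation, and the sample sizes below are illustrative.

import numpy as np

def wilks_confidence(n, coverage=0.95):
    """Confidence that the sample maximum of n random runs exceeds the `coverage` quantile."""
    return 1.0 - coverage ** n

for n in (50, 59, 100, 800):
    print(n, round(wilks_confidence(n), 4))
# n = 59 is the classical minimum for a one-sided 95%/95% statement using the
# sample maximum; larger samples allow higher-order statistics or two-sided bounds.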
Rieger, TR; Musante, CJ
2016-01-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
Uncertainty Modeling for Structural Control Analysis and Synthesis
NASA Technical Reports Server (NTRS)
Campbell, Mark E.; Crawley, Edward F.
1996-01-01
The development of an accurate model of uncertainties for the control of structures that undergo a change in operational environment, based solely on modeling and experimentation in the original environment, is studied. The application used throughout this work is the development of an on-orbit uncertainty model based on ground modeling and experimentation. A ground based uncertainty model consisting of mean errors and bounds on critical structural parameters is developed. The uncertainty model is created using multiple data sets to observe all relevant uncertainties in the system. The Discrete Extended Kalman Filter is used as an identification/parameter estimation method for each data set, in addition to providing a covariance matrix which aids in the development of the uncertainty model. Once ground based modal uncertainties have been developed, they are localized to specific degrees of freedom in the form of mass and stiffness uncertainties. Two techniques are presented: a matrix method which develops the mass and stiffness uncertainties in a mathematical manner; and a sensitivity method which assumes a form for the mass and stiffness uncertainties in macroelements and scaling factors. This form allows the derivation of mass and stiffness uncertainties in a more physical manner. The mass and stiffness uncertainties of the ground based system are then mapped onto the on-orbit system, and projected to create an analogous on-orbit uncertainty model in the form of mean errors and bounds on critical parameters. The Middeck Active Control Experiment is introduced as experimental verification for the localization and projection methods developed. In addition, closed loop results from on-orbit operations of the experiment verify the use of the uncertainty model for control analysis and synthesis in space.
NASA Astrophysics Data System (ADS)
Seo, Seung Beom; Kim, Young-Oh; Kim, Youngil; Eum, Hyung-Il
2018-04-01
When selecting a subset of climate change scenarios (GCM models), the priority is to ensure that the subset reflects the comprehensive range of possible model results for all variables concerned. Though many studies have attempted to improve scenario selection, there is a lack of studies that discuss methods to ensure that the results from a subset of climate models contain the same range of uncertainty in hydrologic variables as when all models are considered. We applied the Katsavounidis-Kuo-Zhang (KKZ) algorithm to select a subset of climate change scenarios and demonstrated its ability to reduce the number of GCM models in an ensemble while preserving the ranges of multiple climate extremes indices. First, we analyzed the role of 27 ETCCDI climate extremes indices for scenario selection and selected the representative climate extreme indices. Before the selection of a subset, we excluded a few deficient GCM models that could not represent the observed climate regime. Subsequently, we discovered that a subset of GCM models selected by the KKZ algorithm with the representative climate extreme indices could not capture the full potential range of changes in hydrologic extremes (e.g., 3-day peak flow and 7-day low flow) in some regional case studies. However, applying the KKZ algorithm with a different set of climate indices, ones correlated to the hydrologic extremes, made it possible to overcome this limitation. Key climate indices, dependent on the hydrologic extremes to be projected, must therefore be determined prior to the selection of a subset of GCM models.
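A minimal sketch of the KKZ max-min selection idea applied to GCMs described by standardised climate indices; the index matrix and ensemble size are synthetic, and the centroid-first starting rule is the convention commonly used with this algorithm rather than a detail confirmed by the study.

import numpy as np

rng = np.random.default_rng(7)
indices = rng.normal(size=(27, 6))        # 27 GCMs x 6 standardised indices (toy values)

def kkz_select(points, k):
    # Start from the point closest to the ensemble centroid, then repeatedly add
    # the point whose minimum distance to the already-selected set is largest.
    selected = [int(np.argmin(np.linalg.norm(points - points.mean(axis=0), axis=1)))]
    while len(selected) < k:
        d = np.min(np.linalg.norm(points[:, None, :] - points[selected][None, :, :], axis=2), axis=1)
        d[selected] = -np.inf                 # never re-select a chosen model
        selected.append(int(np.argmax(d)))
    return selected

print(kkz_select(indices, 5))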
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect—except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
Hypersonic vehicle model and control law development using H(infinity) and mu synthesis
NASA Astrophysics Data System (ADS)
Gregory, Irene M.; Chowdhry, Rajiv S.; McMinn, John D.; Shaughnessy, John D.
1994-10-01
The control system design for a Single Stage To Orbit (SSTO) air breathing vehicle will be central to a successful mission because a precise ascent trajectory will preserve narrow payload margins. The air breathing propulsion system requires the vehicle to fly roughly halfway around the Earth through atmospheric turbulence. The turbulence, the high sensitivity of the propulsion system to inlet flow conditions, the relatively large uncertainty of the parameters characterizing the vehicle, and continuous acceleration make the problem especially challenging. Adequate stability margins must be provided without sacrificing payload mass since payload margins are critical. Therefore, a multivariable control theory capable of explicitly including both uncertainty and performance is needed. The H(infinity) controller in general provides good robustness but can result in conservative solutions for practical problems involving structured uncertainty. Structured singular value mu framework for analysis and synthesis is potentially much less conservative and hence more appropriate for problems with tight margins. An SSTO control system requires: highly accurate tracking of velocity and altitude commands while limiting angle-of-attack oscillations, minimized control power usage, and a stabilized vehicle when atmospheric turbulence and system uncertainty are present. The controller designs using H(infinity) and mu-synthesis procedures were compared. An integrated flight/propulsion dynamic mathematical model of a conical accelerator vehicle was linearized as the vehicle accelerated through Mach 8. Vehicle acceleration through the selected flight condition gives rise to parametric variation that was modeled as a structured uncertainty. The mu-analysis approach was used in the frequency domain to conduct controller analysis and was confirmed by time history plots. Results demonstrate the inherent advantages of the mu framework for this class of problems.
Characterising Tidal Flow Within AN Energetic Tidal Environment
NASA Astrophysics Data System (ADS)
Neill, S. P.; Goward Brown, A.; Lewis, M. J.
2016-02-01
The Pentland Firth is a highly energetic and complex tidal strait separating the north of Scotland from the Orkney Islands and is a key location for tidal energy exploitation. Topographic features, including islands and headlands, combined with bathymetric complexities within the Pentland Firth create turbulent hydrodynamic flows which are difficult to observe. Site selection in tidal energy environments has historically focused on tidal current magnitude. Without consideration of the more complex hydrodynamics of tidal energy environments, tidal energy developers may miss the opportunity to tune their devices or create environment-specific tidal energy converters in order to harness the greatest potential from a site. Fully characterising these tidal energy environments ensures economic energy extraction. Understanding the interaction of energy extraction with the environment will reduce uncertainty in site selection and allow mitigation of any potential environmental concerns. We apply the 3D ROMS model to the Pentland Firth with the aim of resolving uncertainties within tidal energy resource assessment. Flow magnitudes and directions are examined with a focus on tidal phasing and asymmetry and application to sediment dynamics. Using the ROMS model, it is possible to determine the extent to which the tidal resource varies temporally and spatially with tidal energy extraction. Accurately modelling the tidal dynamics within this environment ensures that potential consequences of tidal energy extraction on the surrounding environment are better understood.
Adapting wheat to uncertain future
NASA Astrophysics Data System (ADS)
Semenov, Mikhail; Stratonovitch, Pierre
2015-04-01
This study describes integration of climate change projections from the Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model ensemble with the LARS-WG weather generator, which delivers an attractive option for downscaling of large-scale climate projections from global climate models (GCMs) to local-scale climate scenarios for impact assessments. A subset of 18 GCMs from the CMIP5 ensemble and 2 RCPs, RCP4.5 and RCP8.5, were integrated with LARS-WG. Climate sensitivity indexes for temperature and precipitation were computed for all GCMs and for 21 regions in the world. For computationally demanding impact assessments, where it is not practical to explore all possible combinations of GCM × RCP, climate sensitivity indexes could be used to select a subset of GCMs from CMIP5 with contrasting climate sensitivity. This would allow the uncertainty in impacts resulting from the CMIP5 ensemble to be quantified with fewer simulation experiments. As an example, an in silico design of a wheat ideotype optimised for future climate scenarios in Europe was described. Two contrasting GCMs were selected for the analysis, "hot" HadGEM2-ES and "cool" GISS-E2-R-CC, along with 2 RCPs. Despite large uncertainty in climate projections, several wheat traits were identified as beneficial for the high-yielding wheat ideotypes that could be used as targets for wheat improvement by breeders.
Hansen, Niels Chr; Vuust, Peter; Pearce, Marcus
2016-01-01
Musical expertise entails meticulous stylistic specialisation and enculturation. Even so, research on musical training effects has focused on generalised comparisons between musicians and non-musicians, and cross-cultural work addressing specialised expertise has traded cultural specificity and sensitivity for other methodological limitations. This study aimed to experimentally dissociate the effects of specialised stylistic training and general musical expertise on the perception of melodies. Non-musicians and professional musicians specialising in classical music or jazz listened to sampled renditions of saxophone solos improvised by Charlie Parker in the bebop style. Ratings of explicit uncertainty and expectedness for different continuations of each melodic excerpt were collected. An information-theoretic model of expectation enabled selection of stimuli affording highly certain continuations in the bebop style, but highly uncertain continuations in the context of general tonal expectations, and vice versa. The results showed that expert musicians have acquired probabilistic characteristics of music influencing their experience of expectedness and predictive uncertainty. While classical musicians had internalised key aspects of the bebop style implicitly, only jazz musicians' explicit uncertainty ratings reflected the computational estimates, and jazz-specific expertise modulated the relationship between explicit and inferred uncertainty data. In spite of this, there was no evidence that non-musicians and classical musicians used a stylistically irrelevant cognitive model of general tonal music providing support for the theory of cognitive firewalls between stylistic models in predictive processing of music.
Space-Time Data Fusion for Remote Sensing Applications
NASA Technical Reports Server (NTRS)
Braverman, Amy; Nguyen, H.; Cressie, N.
2011-01-01
NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values, or some function of them, and to provide uncertainty quantification for those inferences. We use a spatiotemporal statistical model that relates the unobserved quantities of interest at point-level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.
When size matters: attention affects performance by contrast or response gain.
Herrmann, Katrin; Montaser-Kouhsari, Leila; Carrasco, Marisa; Heeger, David J
2010-12-01
Covert attention, the selective processing of visual information in the absence of eye movements, improves behavioral performance. We found that attention, both exogenous (involuntary) and endogenous (voluntary), can affect performance by contrast or response gain changes, depending on the stimulus size and the relative size of the attention field. These two variables were manipulated in a cueing task while stimulus contrast was varied. We observed a change in behavioral performance consonant with a change in contrast gain for small stimuli paired with spatial uncertainty and a change in response gain for large stimuli presented at one location (no uncertainty) and surrounded by irrelevant flanking distracters. A complementary neuroimaging experiment revealed that observers' attention fields were wider with than without spatial uncertainty. Our results support important predictions of the normalization model of attention and reconcile previous, seemingly contradictory findings on the effects of visual attention.
Framework for Uncertainty Assessment - Hanford Site-Wide Groundwater Flow and Transport Modeling
NASA Astrophysics Data System (ADS)
Bergeron, M. P.; Cole, C. R.; Murray, C. J.; Thorne, P. D.; Wurstner, S. K.
2002-05-01
Pacific Northwest National Laboratory is in the process of developing and implementing an uncertainty estimation methodology for use in future site assessments that addresses parameter uncertainty as well as uncertainties related to the groundwater conceptual model. The long-term goal of the effort is to apply this methodology in future assessments and analyses made with the Hanford site-wide groundwater model. The basic approach in the framework developed for uncertainty assessment consists of: 1) Alternate conceptual model (ACM) identification to identify and document the major features and assumptions of each conceptual model. The process must also include a periodic review of the existing and proposed new conceptual models as data or understanding become available. 2) ACM development of each identified conceptual model through inverse modeling with historical site data. 3) ACM evaluation to identify which conceptual models are plausible and should be included in any subsequent uncertainty assessments. 4) ACM uncertainty assessments, which will only be carried out for those ACMs determined to be plausible through comparison with historical observations and model structure identification measures. The parameter uncertainty assessment process generally involves: a) Model Complexity Optimization - to identify the important or relevant parameters for the uncertainty analysis; b) Characterization of Parameter Uncertainty - to develop the PDFs for the important uncertain parameters, including identification of any correlations among parameters; c) Propagation of Uncertainty - to propagate parameter uncertainties (e.g., by first-order second-moment methods if applicable or by a Monte Carlo approach) through the model to determine the uncertainty in the model predictions of interest. 5) Estimation of combined ACM and scenario uncertainty by a double sum, with each component of the inner sum (an individual CCDF) representing parameter uncertainty associated with a particular scenario and ACM, and the outer sum enumerating the various plausible ACM and scenario combinations in order to represent the combined estimate of uncertainty (a family of CCDFs). A final important part of the framework includes identification, enumeration, and documentation of all the assumptions, which include those made during conceptual model development, required by the mathematical model, required by the numerical model, made during the spatial and temporal discretization process, needed to assign the statistical model and associated parameters that describe the uncertainty in the relevant input parameters, and finally those assumptions required by the propagation method. Pacific Northwest National Laboratory is operated for the U.S. Department of Energy under Contract DE-AC06-76RL01830.
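Step c), propagation of parameter uncertainty by Monte Carlo, can be sketched as follows. The two parameters, their distributions, and the travel-time function are hypothetical stand-ins for whatever the Hanford groundwater model actually uses; the output is an empirical CCDF of the prediction of interest.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical PDFs for two "important" parameters (e.g. hydraulic conductivity
# and recharge); correlations could be added with a joint sampler.
conductivity = rng.lognormal(mean=0.0, sigma=0.5, size=n)   # m/day
recharge = rng.normal(loc=50.0, scale=10.0, size=n)         # mm/yr

def travel_time(k, r):
    """Stand-in for the groundwater model prediction of interest."""
    return 1000.0 / (k * np.maximum(r, 1.0))

pred = travel_time(conductivity, recharge)

# Empirical CCDF of the prediction: P(prediction > x) for a grid of thresholds.
x = np.linspace(pred.min(), pred.max(), 200)
ccdf = np.array([(pred > xi).mean() for xi in x])
```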
Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise
2017-08-07
The sensitivity and uncertainty analysis course will introduce students to k_eff sensitivity data and cross-section uncertainty data, how k_eff sensitivity data and k_eff uncertainty data are generated, and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in the development of upper subcritical limits.
Quantifying model uncertainty in seasonal Arctic sea-ice forecasts
NASA Astrophysics Data System (ADS)
Blanchard-Wrigglesworth, Edward; Barthélemy, Antoine; Chevallier, Matthieu; Cullather, Richard; Fučkar, Neven; Massonnet, François; Posey, Pamela; Wang, Wanqiu; Zhang, Jinlun; Ardilouze, Constantin; Bitz, Cecilia; Vernieres, Guillaume; Wallcraft, Alan; Wang, Muyin
2017-04-01
Dynamical model forecasts in the Sea Ice Outlook (SIO) of September Arctic sea-ice extent over the last decade have shown lower skill than that found in both idealized model experiments and hindcasts of previous decades. Additionally, it is unclear how different model physics, initial conditions or post-processing techniques contribute to SIO forecast uncertainty. In this work, we have produced a seasonal forecast of 2015 Arctic summer sea ice using SIO dynamical models initialized with identical sea-ice thickness in the central Arctic. Our goals are to calculate the relative contribution of model uncertainty and irreducible error growth to forecast uncertainty and assess the importance of post-processing, and to contrast pan-Arctic forecast uncertainty with regional forecast uncertainty. We find that prior to forecast post-processing, model uncertainty is the main contributor to forecast uncertainty, whereas after forecast post-processing forecast uncertainty is reduced overall, model uncertainty is reduced by an order of magnitude, and irreducible error growth becomes the main contributor to forecast uncertainty. While all models generally agree in their post-processed forecasts of September sea-ice volume and extent, this is not the case for sea-ice concentration. Additionally, forecast uncertainty of sea-ice thickness grows at a much higher rate along Arctic coastlines relative to the central Arctic ocean. Potential ways of offering spatial forecast information based on the timescale over which the forecast signal beats the noise are also explored.
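One way to make the uncertainty partition concrete is to compare the spread across model means (model uncertainty) with the mean within-model ensemble spread, before and after a simple bias-removing post-processing step. The sketch below uses synthetic forecasts and a known synthetic bias as a stand-in for a hindcast-derived correction; it illustrates the decomposition only and is not the SIO analysis itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_members = 6, 5

# Synthetic September extent forecasts (million km^2): each row is one dynamical
# model's ensemble; each model also carries its own systematic bias.
raw = rng.normal(loc=rng.uniform(4.0, 6.0, n_models)[:, None], scale=0.3,
                 size=(n_models, n_members))
model_bias = rng.normal(0.0, 0.5, n_models)[:, None]
raw = raw + model_bias

# Post-processing: remove each model's own bias (here the known synthetic bias
# stands in for a correction estimated from that model's hindcasts).
post = raw - model_bias

def decompose(fcst):
    model_means = fcst.mean(axis=1)
    model_uncertainty = model_means.var()      # spread across model means
    spread = fcst.var(axis=1).mean()           # mean within-model ensemble spread
    return model_uncertainty, spread

print("raw:", decompose(raw))
print("post-processed:", decompose(post))
```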
A structured analysis of uncertainty surrounding modeled impacts of groundwater-extraction rules
NASA Astrophysics Data System (ADS)
Guillaume, Joseph H. A.; Qureshi, M. Ejaz; Jakeman, Anthony J.
2012-08-01
Integrating economic and groundwater models for groundwater management can help improve understanding of the trade-offs involved between conflicting socioeconomic and biophysical objectives. However, there is significant uncertainty in most strategic decision-making situations, including in the models constructed to represent them. If not addressed, this uncertainty may be used to challenge the legitimacy of the models and decisions made using them. In this context, a preliminary uncertainty analysis was conducted of a dynamic coupled economic-groundwater model aimed at assessing groundwater extraction rules. The analysis demonstrates how a variety of uncertainties in such a model can be addressed. A number of methods are used, including propagation of scenarios and bounds on parameters, multiple models, block bootstrap time-series sampling and robust linear regression for model calibration. These methods are described within the context of a theoretical uncertainty management framework, using a set of fundamental uncertainty management tasks and an uncertainty typology.
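Block bootstrap resampling of a calibration time series, one of the methods listed above, can be sketched briefly; the toy monthly series and the block length of 12 are assumptions for illustration only. Refitting the model on each replicate yields a distribution of calibrated parameters.

```python
import numpy as np

def block_bootstrap(series, block_len, rng):
    """Resample a time series in contiguous blocks to preserve autocorrelation."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

rng = np.random.default_rng(1)
# Toy monthly series with seasonality and noise, standing in for observed data.
flows = np.sin(np.arange(240) / 12.0) + rng.normal(0, 0.2, 240)

# Each replicate is one plausible calibration dataset.
replicates = [block_bootstrap(flows, block_len=12, rng=rng) for _ in range(100)]
```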
Quantifying uncertainty in carbon and nutrient pools of coarse woody debris
NASA Astrophysics Data System (ADS)
See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.
2016-12-01
Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay classification, using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards and Technology. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
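The Monte Carlo propagation described above can be illustrated for a single log: sample each input from its error distribution, compute volume with Smalian's formula, and accumulate carbon. All distributions and values below are invented placeholders, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Hypothetical per-log inputs with their uncertainties (illustrative only).
length = rng.normal(5.0, 0.05, n)            # m, measurement error
d_large = rng.normal(0.40, 0.01, n)          # m, large-end diameter
d_small = rng.normal(0.25, 0.01, n)          # m, small-end diameter
density = rng.normal(350.0, 60.0, n)         # kg m^-3, decay-class specific
c_frac = rng.normal(0.48, 0.02, n)           # carbon fraction
model_err = rng.normal(1.0, 0.05, n)         # multiplicative volume-model error

# Smalian's formula: volume from the mean of the two end cross-sectional areas.
area_large = np.pi * (d_large / 2) ** 2
area_small = np.pi * (d_small / 2) ** 2
volume = 0.5 * (area_large + area_small) * length * model_err

carbon = volume * density * c_frac           # kg C per log
print(f"mean = {carbon.mean():.1f} kg C, 95% CI = "
      f"({np.percentile(carbon, 2.5):.1f}, {np.percentile(carbon, 97.5):.1f})")
```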
Jones-Farrand, D. Todd; Fearer, Todd M.; Thogmartin, Wayne E.; Thompson, Frank R.; Nelson, Mark D.; Tirpak, John M.
2011-01-01
Selection of a modeling approach is an important step in the conservation planning process, but little guidance is available. We compared two statistical and three theoretical habitat modeling approaches representing those currently being used for avian conservation planning at landscape and regional scales: hierarchical spatial count (HSC), classification and regression tree (CRT), habitat suitability index (HSI), forest structure database (FS), and habitat association database (HA). We focused our comparison on models for five priority forest-breeding species in the Central Hardwoods Bird Conservation Region: Acadian Flycatcher, Cerulean Warbler, Prairie Warbler, Red-headed Woodpecker, and Worm-eating Warbler. Lacking complete knowledge on the distribution and abundance of each species with which we could illuminate differences between approaches and provide strong grounds for recommending one approach over another, we used two approaches to compare models: rank correlations among model outputs and comparison of spatial correspondence. In general, rank correlations were significantly positive among models for each species, indicating general agreement among the models. Worm-eating Warblers had the highest pairwise correlations, all of which were significant (P < 0.05). Red-headed Woodpeckers had the lowest agreement among models, suggesting greater uncertainty in the relative conservation value of areas within the region. We assessed model uncertainty by mapping the spatial congruence in priorities (i.e., top ranks) resulting from each model for each species and calculating the coefficient of variation across model ranks for each location. This allowed identification of areas more likely to be good targets of conservation effort for a species, those areas that were least likely, and those in between where uncertainty is higher and thus conservation action incorporates more risk. Based on our results, models developed independently for the same purpose (conservation planning for a particular species in a particular geography) yield different answers and thus different conservation strategies. We assert that using only one habitat model (even if validated) as the foundation of a conservation plan is risky. Using multiple models (i.e., ensemble prediction) can reduce uncertainty and increase efficacy of conservation action when models corroborate one another and increase understanding of the system when they do not.
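The two comparison steps, rank correlation among model outputs and a coefficient of variation across model ranks as a map of model-selection uncertainty, can be sketched as follows; the five model names are taken from the abstract, but the scores are random placeholders.

```python
import numpy as np
from scipy.stats import spearmanr, rankdata

rng = np.random.default_rng(3)
n_sites = 500

# Synthetic habitat-value surfaces from the five modelling approaches for one species.
models = {name: rng.random(n_sites) for name in ["HSC", "CRT", "HSI", "FS", "HA"]}
scores = np.column_stack(list(models.values()))

# Pairwise Spearman rank correlations measure agreement between approaches.
rho, _ = spearmanr(scores)

# Rank each site within each model, then use the coefficient of variation of a
# site's ranks across models as a measure of model-selection uncertainty.
ranks = np.column_stack([rankdata(scores[:, j]) for j in range(scores.shape[1])])
cv = ranks.std(axis=1) / ranks.mean(axis=1)
high_uncertainty_sites = np.argsort(cv)[-10:]
```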
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
NASA Astrophysics Data System (ADS)
Migliavacca, M.; Sonnentag, O.; Keenan, T. F.; Cescatti, A.; O'Keefe, J.; Richardson, A. D.
2012-01-01
Phenology, the timing of recurring life cycle events, controls numerous land surface feedbacks to the climate system through the regulation of exchanges of carbon, water and energy between the biosphere and atmosphere. Land surface models, however, are known to have systematic errors in the simulation of spring phenology, which could potentially propagate to uncertainty in modeled responses to future climate change. Here, we analyzed the Harvard Forest phenology record to investigate and characterize the sources of uncertainty in phenological forecasts and the subsequent impacts on model forecasts of carbon and water cycling in the future. Using a model-data fusion approach, we combined information from 20 yr of phenological observations of 11 North American woody species with 12 phenological models of different complexity to predict leaf bud-burst. The evaluation of different phenological models indicated support for spring warming models with photoperiod limitations and, though to a lesser extent, for chilling models based on the alternating model structure. We assessed three different sources of uncertainty in phenological forecasts: parameter uncertainty, model uncertainty, and driver uncertainty. The latter was characterized by running the models to 2099 using 2 different IPCC climate scenarios (A1fi vs. B1, i.e. a high CO2 emissions vs. a low CO2 emissions scenario). Parameter uncertainty was the smallest (average 95% CI: 2.4 days century⁻¹ for scenario B1 and 4.5 days century⁻¹ for A1fi), whereas driver uncertainty was the largest (up to 8.4 days century⁻¹ in the simulated trends). The uncertainty related to model structure is also large, and the predicted bud-burst trends as well as the shape of the smoothed projections varied somewhat among models (±7.7 days century⁻¹ for A1fi, ±3.6 days century⁻¹ for B1). The forecast sensitivity of bud-burst to temperature (i.e. days bud-burst advanced per degree of warming) varied between 2.2 and 5.2 days °C⁻¹ depending on model structure. We quantified the impact of uncertainties in bud-burst forecasts on simulated carbon and water fluxes using a process-based terrestrial biosphere model. Uncertainty in phenology model structure led to uncertainty in the description of the seasonality of processes, which accumulated to uncertainty in annual model estimates of gross primary productivity (GPP) and evapotranspiration (ET) of 9.6% and 2.9%, respectively. A sensitivity analysis shows that a variation of ±10 days in bud-burst dates led to a variation of ±5.0% for annual GPP and about ±2.0% for ET. For phenology models, differences among future climate scenarios represent the largest source of uncertainty, followed by uncertainties related to model structure, and finally, uncertainties related to model parameterization. The uncertainties we have quantified will affect the description of the seasonality of processes and in particular the simulation of carbon uptake by forest ecosystems, with a larger impact of uncertainties related to phenology model structure, followed by uncertainties related to phenological model parameterization.
Using Latent Class Analysis to Model Temperament Types.
Loken, Eric
2004-10-01
Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks was used for model selection. Results show at least three types of infant temperament, with patterns consistent with those identified by previous researchers who classified the infants using a theoretically based system. Multiple imputation of group memberships is proposed as an alternative to assigning subjects to the latent class with maximum posterior probability in order to reflect variance due to uncertainty in the parameter estimation. Latent class membership at four months of age predicted longitudinal outcomes at four years of age. The example illustrates issues relevant to all mixture models, including estimation, multi-modality, model selection, and comparisons based on the latent group indicators.
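A rough analogue of this workflow, fitting a three-class mixture by EM and drawing multiple imputations of class membership from the posterior rather than hard-assigning the maximum-probability class, might look like the following. A Gaussian mixture on synthetic scores stands in for the actual observational temperament data and the model used in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Synthetic stand-in for infant temperament scores (e.g. motor activity, crying,
# positive affect); the real observational data would replace this array.
X = np.vstack([rng.normal(m, 0.7, size=(60, 3)) for m in (-1.5, 0.0, 1.5)])

gmm = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(X)
posterior = gmm.predict_proba(X)          # class-membership probabilities

# Multiple imputation of latent class: draw memberships from the posterior to
# carry classification uncertainty into downstream analyses.
imputations = [
    np.array([rng.choice(3, p=p) for p in posterior]) for _ in range(20)
]
```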
Multi-model inference for incorporating trophic and climate uncertainty into stock assessments
NASA Astrophysics Data System (ADS)
Ianelli, James; Holsman, Kirstin K.; Punt, André E.; Aydin, Kerim
2016-12-01
Ecosystem-based fisheries management (EBFM) approaches allow a broader and more extensive consideration of objectives than is typically possible with conventional single-species approaches. Ecosystem linkages may include trophic interactions and climate change effects on productivity for the relevant species within the system. Presently, models are evolving to include a comprehensive set of fishery and ecosystem information to address these broader management considerations. The increased scope of EBFM approaches is accompanied with a greater number of plausible models to describe the systems. This can lead to harvest recommendations and biological reference points that differ considerably among models. Model selection for projections (and specific catch recommendations) often occurs through a process that tends to adopt familiar, often simpler, models without considering those that incorporate more complex ecosystem information. Multi-model inference provides a framework that resolves this dilemma by providing a means of including information from alternative, often divergent models to inform biological reference points and possible catch consequences. We apply an example of this approach to data for three species of groundfish in the Bering Sea: walleye pollock, Pacific cod, and arrowtooth flounder using three models: 1) an age-structured "conventional" single-species model, 2) an age-structured single-species model with temperature-specific weight at age, and 3) a temperature-specific multi-species stock assessment model. The latter two approaches also include consideration of alternative future climate scenarios, adding another dimension to evaluate model projection uncertainty. We show how Bayesian model-averaging methods can be used to incorporate such trophic and climate information to broaden single-species stock assessments by using an EBFM approach that may better characterize uncertainty.
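The model-averaging step can be sketched generically: weight each model's reference point by a normalised evidence-based weight and carry the between-model variance into the uncertainty budget. The reference points and log-evidence values below are invented for illustration and do not correspond to the Bering Sea assessments.

```python
import numpy as np

# Hypothetical biological reference points (e.g. B_MSY, kt) from three assessment
# models, with weights derived from (hypothetical) log marginal likelihoods.
ref_points = np.array([420.0, 390.0, 510.0])
log_evidence = np.array([-1052.3, -1049.8, -1050.6])

weights = np.exp(log_evidence - log_evidence.max())
weights /= weights.sum()

bma_mean = np.sum(weights * ref_points)
# The between-model variance term carries model-selection uncertainty.
bma_var = np.sum(weights * (ref_points - bma_mean) ** 2)
print(bma_mean, np.sqrt(bma_var))
```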
Comparing Families of Dynamic Causal Models
Penny, Will D.; Stephan, Klaas E.; Daunizeau, Jean; Rosa, Maria J.; Friston, Karl J.; Schofield, Thomas M.; Leff, Alex P.
2010-01-01
Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data. PMID:20300649
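A compact illustration of family-level inference followed by within-family averaging, using made-up posterior model probabilities and parameter estimates rather than any real DCM output:

```python
import numpy as np

# Hypothetical posterior model probabilities for six DCMs, grouped into two
# families, e.g. "serial" vs "parallel" processing.
p_model = np.array([0.05, 0.10, 0.30, 0.20, 0.25, 0.10])
families = {"serial": [0, 1, 2], "parallel": [3, 4, 5]}

# Family-level inference: sum model probabilities within each family.
p_family = {f: p_model[idx].sum() for f, idx in families.items()}

# BMA within the winning family: weight each model's parameter estimate by its
# probability renormalised within that family.
theta = np.array([0.8, 1.1, 0.9, 1.6, 1.4, 1.7])   # hypothetical connection strengths
best = max(p_family, key=p_family.get)
idx = families[best]
w = p_model[idx] / p_model[idx].sum()
theta_family = np.sum(w * theta[idx])
```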
Determination of Uncertainties for the New SSME Model
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Hawk, Clark W.
1996-01-01
This report discusses the uncertainty analysis performed in support of a new test analysis and performance prediction model for the Space Shuttle Main Engine. The new model utilizes uncertainty estimates for experimental data and for the analytical model to obtain the most plausible operating condition for the engine system. This report discusses the development of the data sets and uncertainty estimates to be used in the development of the new model. It also presents the application of uncertainty analysis to analytical models, including the uncertainty analysis for the conservation of mass and energy balance relations. A new methodology for the assessment of the uncertainty associated with linear regressions is also presented.
NASA Astrophysics Data System (ADS)
Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.
2018-03-01
Simulation models that examine the dynamic behavior of real structures need to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then treated as random variables in the finite element model to explore the effects of uncertainty on the quality of the model outputs, i.e. the natural frequencies. The accuracy of the output predictions from the model is compared with experimental results. To this end, non-contact experimental modal analysis is conducted to identify the natural frequencies of the samples. The results show good agreement with the experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies than material parameters, even though the material uncertainties are about two times larger than the geometrical uncertainties. This gives valuable insight for improving the finite element model, given the various parameter ranges required in a modeling process involving uncertainty.
Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting
NASA Astrophysics Data System (ADS)
Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.
2009-04-01
In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty were introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can, especially in upstream catchments where forecast uncertainty is strongly dependent on the current predictability of the atmosphere, be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics:
1. Upstream catchments with high influence of the weather forecast. a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing. b) Corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely. c) The 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member. d) For each forecast timestep, the overall error distribution (i.e. over the 'model error' distributions of all ensemble members) is calculated. e) From this distribution, the uncertainty range on a desired level (here: the 10% and 90% percentiles) is extracted and drawn as a forecast envelope. f) As the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed the 'lead forecast' is chosen and shown in addition to the uncertainty bounds. This can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologist's advice.
2. Downstream catchments with low influence of the weather forecast. In downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and a large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and the ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) A hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing; here, the corresponding inflow hydrographs from all upstream catchments must additionally be used. b) As for an upstream catchment, the uncertainty range is determined by combining the 'model error' and the ensemble member forecasts. c) In addition, the 'overall error' is superimposed on the 'lead forecast'. For reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments. d) From the resulting two uncertainty ranges (one from the ensemble forecast and the 'model error', one from the 'lead forecast' and the 'overall error'), the envelope is taken as the most prudent uncertainty range.
In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of 10% and 90% percentiles.
As in part I of this study, the methodology, as well as the usefulness (or otherwise) of the resulting uncertainty ranges, will be presented and discussed by means of typical examples.
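The upstream procedure (steps a-e above) amounts to superimposing a lead-time-dependent error distribution on every ensemble member and extracting percentiles over the combined sample. A minimal sketch with purely synthetic hydrographs and an assumed Gaussian 'model error' whose spread grows with lead time (both are illustrative stand-ins, not the operational Bavarian configuration):

```python
import numpy as np

rng = np.random.default_rng(11)
n_members, n_steps, n_err = 16, 48, 500

# Hypothetical hydrological ensemble: one forecast hydrograph per meteorological
# member (m^3/s), plus a lead-time-dependent 'model error' standard deviation.
members = 50 + 10 * rng.random((n_members, 1)) * np.linspace(0, 1, n_steps)
err_scale = 2.0 + 0.2 * np.arange(n_steps)          # error grows with lead time

# Superimpose sampled model errors on every member at every time step, then take
# the 10% and 90% percentiles over all members and error samples.
errors = rng.normal(0.0, err_scale, size=(n_err, 1, n_steps))
combined = members[None, :, :] + errors             # shape (n_err, n_members, n_steps)
lower = np.percentile(combined, 10, axis=(0, 1))
upper = np.percentile(combined, 90, axis=(0, 1))
```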
Robust planning of dynamic wireless charging infrastructure for battery electric buses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhaocai; Song, Ziqi
2017-10-01
Battery electric buses with zero tailpipe emissions have great potential in improving environmental sustainability and livability of urban areas. However, the problems of high cost and limited range associated with on-board batteries have substantially limited the popularity of battery electric buses. The technology of dynamic wireless power transfer (DWPT), which provides bus operators with the ability to charge buses while in motion, may be able to effectively alleviate the drawbacks of electric buses. In this paper, we address the problem of simultaneously selecting the optimal location of the DWPT facilities and designing the optimal battery sizes of electric buses for a DWPT electric bus system. The problem is first constructed as a deterministic model in which the uncertainty of energy consumption and travel time of electric buses is neglected. The methodology of robust optimization (RO) is then adopted to address the uncertainty of energy consumption and travel time. The affinely adjustable robust counterpart (AARC) of the deterministic model is developed, and its equivalent tractable mathematical programming is derived. Both the deterministic model and the robust model are demonstrated with a real-world bus system. The results of our study demonstrate that the proposed deterministic model can effectively determine the allocation of DWPT facilities and the battery sizes of electric buses for a DWPT electric bus system, and the robust model can further provide optimal designs that are robust against the uncertainty of energy consumption and travel time for electric buses.
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher-fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. Finally, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
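The classification step can be sketched with one of the three algorithms mentioned, a random forest. The flow features, the rule labelling points as violating an eddy-viscosity assumption, and all data below are synthetic stand-ins for the DNS/LES-derived training database.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-in for flow features at mesh points (e.g. strain-rate invariants,
# turbulence intensity) and labels marking where a RANS assumption breaks down.
X = rng.normal(size=(5000, 6))
y = (X[:, 0] * X[:, 1] + 0.5 * X[:, 2] > 0.3).astype(int)   # hypothetical breakdown rule

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Point-by-point probability that the assumption is violated can be mapped back
# onto the mesh as a RANS-uncertainty diagnostic.
p_high_uncertainty = clf.predict_proba(X_test)[:, 1]
print("held-out accuracy:", clf.score(X_test, y_test))
```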
Measurement uncertainty evaluation of conicity error inspected on CMM
NASA Astrophysics Data System (ADS)
Wang, Dongxia; Song, Aiguo; Wen, Xiulan; Xu, Youxiong; Qiao, Guifang
2016-01-01
The cone is widely used in mechanical design for rotation, centering and fixing. Whether the conicity error can be measured and evaluated accurately will directly influence its assembly accuracy and working performance. According to the new-generation geometrical product specification (GPS), the error and its measurement uncertainty should be evaluated together. The mathematical model of the minimum zone conicity error is established and an improved immune evolutionary algorithm (IIEA) is proposed to search for the conicity error. In the IIEA, initial antibodies are first generated by using quasi-random sequences and two kinds of affinities are calculated. Then, each antibody clone is generated and the clones are self-adaptively mutated so as to maintain diversity. Similar antibodies are suppressed and new random antibodies are generated. Because the mathematical model of conicity error is strongly nonlinear and the input quantities are not independent, it is difficult to use the Guide to the expression of uncertainty in measurement (GUM) method to evaluate measurement uncertainty. An adaptive Monte Carlo method (AMCM) is proposed to estimate measurement uncertainty, in which the number of Monte Carlo trials is selected adaptively and the quality of the numerical results is directly controlled. The cone parts were machined on a CK6140 lathe and measured on a Miracle NC 454 Coordinate Measuring Machine (CMM). The experimental results confirm that the proposed method not only can search for the approximate solution of the minimum zone conicity error (MZCE) rapidly and precisely, but also can evaluate measurement uncertainty and give control variables with an expected numerical tolerance. The conicity errors computed by the proposed method are 20%-40% less than those computed by the NC454 CMM software and the evaluation accuracy improves significantly.
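The adaptive Monte Carlo idea, growing the sample in batches until a numerical tolerance on the result is met, can be sketched generically; the conicity function, the nominal cone angle and the probe-noise values below are assumptions for illustration, not the NC454 measurement setup.

```python
import numpy as np

def adaptive_mc(model, sample_inputs, tol, batch=10_000, max_batches=50, seed=0):
    """Grow the Monte Carlo sample until the standard error of the mean meets tol."""
    rng = np.random.default_rng(seed)
    results = np.empty(0)
    for _ in range(max_batches):
        results = np.concatenate([results, model(*sample_inputs(batch, rng))])
        stderr = results.std(ddof=1) / np.sqrt(results.size)
        if stderr < tol:
            break
    return results

# Hypothetical conicity-error measure: apex-angle deviation from a nominal 20 degrees,
# computed from noisy probed radius and height values.
def conicity(r, h):
    return np.abs(2 * np.arctan(r / h) - np.radians(20.0))

def sample_inputs(n, rng):
    return rng.normal(10.0, 0.002, n), rng.normal(55.0, 0.005, n)   # mm, probe noise

errs = adaptive_mc(conicity, sample_inputs, tol=1e-6)
print(np.percentile(errs, [2.5, 97.5]))
```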
Integrated assessment of future land use in Brazil under increasing demand for bioenergy
NASA Astrophysics Data System (ADS)
Verstegen, Judith; van der Hilst, Floor; Karssenberg, Derek; Faaij, André
2014-05-01
Environmental impacts of a future increase in demand for bioenergy depend on the magnitude, location and pattern of the direct and indirect land use change of energy cropland expansion. Here we aim at 1) projecting the spatiotemporal pattern of sugar cane expansion and the effect on other land uses in Brazil towards 2030, and 2) assessing the uncertainty herein. For the spatio-temporal projection, four model components are used: 1) an initial land use map that shows the initial amount and location of sugar cane and all other relevant land use classes in the system, 2) an economic model to project the quantity of change of all land uses, 3) a spatially explicit land use model that determines the location of change of all land uses, and 4) various analyses to determine the impacts of these changes on water, socio-economics, and biodiversity. All four model components are sources of uncertainty, which is quantified by defining error models for all components and their inputs and propagating these errors through the chain of components. No recent accurate land use map is available for Brazil, so municipal census data and the global land cover map GlobCover are combined to create the initial land use map. The census data are disaggregated stochastically using GlobCover as a probability surface, to obtain a stochastic land use raster map for 2006. Since bioenergy is a global market, the quantity of change in sugar cane in Brazil depends on dynamics in both Brazil itself and other parts of the world. Therefore, a computable general equilibrium (CGE) model, MAGNET, is run to produce a time series of the relative change of all land uses given an increased future demand for bioenergy. A sensitivity analysis finds the upper and lower boundaries hereof, to define this component's error model. An initial selection of drivers of location for each land use class is extracted from the literature. Using a Bayesian data assimilation technique and census data from 2007 to 2012 as observational data, the model is identified, meaning that the final selection and optimal relative importance of the drivers of location are determined. The data assimilation technique takes into account uncertainty in the observational data and yields a stochastic representation of the identified model. Using all stochastic inputs, this land use change model is run to find at which locations the future land use changes occur and to quantify the associated uncertainty. The results indicate that in the initial land use map especially the shapes of sugar cane and other land use patches are uncertain, not so much their locations. From the economic model we can derive that dynamics in the livestock sector play a major role in the land use development of Brazil; the effect of this uncertainty on the model output is large. If the intensity of the livestock sector is not increased, future projections show a large loss of natural vegetation. Impacts on water are not that large, except when irrigation is applied on the expanded cropland.
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
Two-agent cooperative search using game models with endurance-time constraints
NASA Astrophysics Data System (ADS)
Sujit, P. B.; Ghose, Debasish
2010-07-01
In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths are formed using these cells which the game theoretical strategies use to select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations are carried out which show the superiority of the game theoretical strategies over greedy strategy for different look ahead step length paths. Within the game theoretical strategies, non-cooperative Nash and cooperative strategy perform similarly in an ideal case, but Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning of the search space into sectors to reduce computational overhead without performance degradation.
Evaluation of GCMs in the context of regional predictive climate impact studies.
NASA Astrophysics Data System (ADS)
Kokorev, Vasily; Anisimov, Oleg
2016-04-01
Significant improvements in the structure, complexity, and general performance of earth system models (ESMs) have been made in the past decade. Despite these efforts, the range of uncertainty in predicting regional climate impacts remains large. The problem is two-fold. Firstly, there is an intrinsic conflict between the local and regional scales of climate impacts and adaptation strategies, on one hand, and the larger scales at which ESMs demonstrate better performance, on the other. Secondly, there is a growing understanding that the majority of impacts involve thresholds and are thus driven by extreme climate events, whereas the emphasis in climate projections is conventionally placed on gradual changes in means. In this study we assess the uncertainty in projecting extreme climatic events within a region-specific and process-oriented context by examining the skills and ranking of ESMs. We developed a synthetic regionalization of Northern Eurasia that accounts for the spatial features of modern climatic changes and major environmental and socio-economic impacts. Elements of such a fragmentation could be considered as natural focus regions that bridge the gap between the spatial scales adopted in climate-impacts studies and the patterns of climate change simulated by ESMs. In each focus region we selected several target meteorological variables that govern the key regional impacts, and examined the ability of the models to replicate their seasonal and annual means and trends by testing them against observations. We performed a similar evaluation with regard to extremes and statistics of the target variables. Lastly, we used the results of these analyses to select sets of models that demonstrate the best performance in selected focus regions with regard to selected sets of target meteorological parameters. Ultimately, we ranked the models according to their skills, identified top-end models that reproduce the behavior of climatic parameters "better than average", and eliminated the outliers. Since the criteria for selecting the "best" models are somewhat loose, we constructed several regional ensembles consisting of different numbers of high-ranked models and compared results from these optimized ensembles with observations and with the ensemble of all models. We tested our approach in a specific regional application for the terrestrial Russian Arctic, considering permafrost and Arctic biomes as key regional climate-dependent systems, and the temperature and precipitation characteristics governing their state as target meteorological parameters. Results of this case study are deposited on the web portal www.permafrost.su/gcms
NASA Astrophysics Data System (ADS)
Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.
2006-12-01
Characterization of uncertainty associated with groundwater quality models is often of critical importance, for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and non-cognitive uncertainties. Non-cognitive uncertainty, such as physical randomness or statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals obtained from α-cuts. An important property of this theory is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is demonstrated by assessing the propagation of parameter uncertainty in estimating the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
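Plain (crisp) Latin Hypercube Sampling, the starting point that FLHS extends with fuzzy α-cuts, can be sketched in a few lines. The lognormal source concentration and normal degradation rate are hypothetical parameter distributions for illustration; the fuzzy layer itself is not shown.

```python
import numpy as np
from scipy.stats import lognorm, norm

def latin_hypercube(n_samples, n_dims, rng):
    """Plain LHS: one sample per equal-probability stratum in every dimension."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # Independently permute each column so the strata pairings are random.
    for j in range(n_dims):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u                     # stratified uniform [0, 1) samples

rng = np.random.default_rng(4)
u = latin_hypercube(200, 2, rng)

# Map the stratified uniforms onto the parameter distributions (values illustrative).
conc = lognorm(s=0.6, scale=100.0).ppf(u[:, 0])     # source concentration, mg/L
decay = norm(loc=0.02, scale=0.005).ppf(u[:, 1])    # degradation rate, 1/day
```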
Modeling uncertainty: quicksand for water temperature modeling
Bartholow, John M.
2003-01-01
Uncertainty has been a hot topic relative to science generally, and modeling specifically. Modeling uncertainty comes in various forms: measured data, limited model domain, model parameter estimation, model structure, sensitivity to inputs, modelers themselves, and users of the results. This paper addresses important components of uncertainty in modeling water temperatures, and discusses several areas that need attention as the modeling community grapples with how to incorporate uncertainty into modeling without getting stuck in the quicksand that prevents constructive contributions to policy making. The material, and in particular the references, are meant to supplement the presentation given at this conference.
The Contribution of Human Factors in Military System Development: Methodological Considerations
1980-07-01
[Fragmentary extract; recoverable topics: risk/uncertainty analysis, project scoring, utility scales, relevance tree techniques (reverse factor analysis), and computer simulation; cites Souder, W.E., on the effectiveness of mathematical models for R&D project selection, Management Science, April 1973.]
SPOTting model parameters using a ready-made Python package
NASA Astrophysics Data System (ADS)
Houska, Tobias; Kraft, Philipp; Breuer, Lutz
2015-04-01
The selection and parameterization of reliable process descriptions in ecological modelling is driven by several uncertainties. The procedure is highly dependent on various criteria, such as the algorithm used, the likelihood function selected and the definition of the prior parameter distributions. A wide variety of tools have been developed in the past decades to optimize parameters. Some of the tools are closed source. Due to this, the choice of a specific parameter estimation method is sometimes more dependent on its availability than on its performance. A toolbox with a large set of methods can support users in deciding about the most suitable method. Further, it enables testing and comparison of different methods. We developed SPOT (Statistical Parameter Optimization Tool), an open-source Python package containing a comprehensive set of modules to analyze and optimize parameters of (environmental) models. SPOT comes along with a selected set of algorithms for parameter optimization and uncertainty analyses (Monte Carlo, MC; Latin Hypercube Sampling, LHS; Maximum Likelihood, MLE; Markov Chain Monte Carlo, MCMC; Shuffled Complex Evolution, SCE-UA; Differential Evolution Markov Chain, DE-MCZ), together with several likelihood functions (Bias, (log-) Nash-Sutcliffe model efficiency, Correlation Coefficient, Coefficient of Determination, Covariance, (Decomposed-, Relative-, Root-) Mean Squared Error, Mean Absolute Error, Agreement Index) and prior distributions (Binomial, Chi-Square, Dirichlet, Exponential, Laplace, (log-, multivariate-) Normal, Pareto, Poisson, Cauchy, Uniform, Weibull) to sample from. The model-independent structure makes it suitable for analyzing a wide range of applications. We apply all algorithms of the SPOT package in three different case studies. Firstly, we investigate the response of the Rosenbrock function, where the MLE algorithm shows its strengths. Secondly, we study the Griewank function, which has a challenging response surface for optimization methods. Here we see simple algorithms like MCMC struggling to find the global optimum of the function, while algorithms like SCE-UA and DE-MCZ show their strengths. Thirdly, we apply an uncertainty analysis to a one-dimensional physically based hydrological model built with the Catchment Modelling Framework (CMF). The model is driven by meteorological and groundwater data from a Free Air Carbon Enrichment (FACE) experiment in Linden (Hesse, Germany). Simulation results are evaluated with measured soil moisture data. We search for optimal parameter sets of the van Genuchten-Mualem function and find different, equally optimal solutions with some of the algorithms. The case studies reveal that the implemented SPOT methods work sufficiently well. They further show the benefit of having one tool at hand that includes a number of parameter search methods, likelihood functions and a priori parameter distributions within one platform-independent package.
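The basic pattern that SPOT wraps, sampling parameter sets from prior distributions, running the model, and scoring each run with a likelihood function such as the Nash-Sutcliffe efficiency, can be sketched without the package itself. The toy model, priors and data below are illustrative and do not reproduce the SPOT or CMF interfaces.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe model efficiency: 1 is perfect, 0 matches the mean of obs."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def toy_model(theta, x):
    """Stand-in for an environmental model; theta = (a, b)."""
    a, b = theta
    return a * np.exp(-b * x)

rng = np.random.default_rng(8)
x = np.linspace(0, 10, 50)
obs = toy_model((2.0, 0.3), x) + rng.normal(0, 0.05, x.size)

# Monte Carlo search over uniform priors; SPOT wraps this pattern (and smarter
# samplers such as LHS, MCMC or SCE-UA) behind a common interface.
samples = np.column_stack([rng.uniform(0.5, 5.0, 2000), rng.uniform(0.05, 1.0, 2000)])
eff = np.array([nash_sutcliffe(toy_model(s, x), obs) for s in samples])
best = samples[eff.argmax()]
```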
Benefit-cost estimation for alternative drinking water maximum contaminant levels
NASA Astrophysics Data System (ADS)
Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.
2001-08-01
A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
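The compliance-behaviour logic described here (simulate source-water concentrations, flag systems above the MCL, let each choose its least-cost feasible option, then aggregate) can be illustrated with a toy Monte Carlo loop. The distribution parameters, removal efficiencies and unit costs below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs (illustrative only, not the paper's fitted model)
n_systems = 10_000
mcl = 10.0  # ug/L, candidate arsenic standard
# Lognormal source-water arsenic concentrations; the paper conditions these on
# state and source type, here a single illustrative distribution is used.
conc = rng.lognormal(mean=np.log(3.0), sigma=1.0, size=n_systems)

# Three illustrative compliance options: (removal fraction, annual cost per system)
options = [(0.50, 20_000.0), (0.80, 45_000.0), (0.95, 90_000.0)]

costs = np.zeros(n_systems)
for i, c in enumerate(conc):
    if c <= mcl:
        continue  # already in compliance
    feasible = [cost for rem, cost in options if c * (1 - rem) <= mcl]
    costs[i] = min(feasible) if feasible else max(cost for _, cost in options)

print(f"systems out of compliance: {(conc > mcl).sum()}")
print(f"national annual compliance cost (illustrative): ${costs.sum():,.0f}")
```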
NASA Astrophysics Data System (ADS)
Friedel, Michael; Buscema, Massimo
2016-04-01
Aquatic ecosystem models can potentially be used to understand the influence of stresses on catchment resource quality. Given that catchment responses are functions of natural and anthropogenic stresses reflected in sparse, spatiotemporal biological, physical, and chemical measurements, an ecosystem is difficult to model using statistical or numerical methods. We propose an artificial adaptive systems approach to model ecosystems. First, an unsupervised machine-learning (ML) network is trained using the set of available sparse and disparate data variables. Second, an evolutionary algorithm with genetic doping is applied to reduce the number of ecosystem variables to an optimal set. Third, the optimal set of ecosystem variables is used to retrain the ML network. Fourth, a stochastic cross-validation approach is applied to quantify and compare the nonlinear uncertainty in selected predictions of the original and reduced models. Results are presented for aquatic ecosystems (tens of thousands of square kilometers) undergoing landscape change: the Upper Illinois River Basin and the Central Colorado Assessment Project Area in the USA, and the Southland region of New Zealand.
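The four-step workflow (train on all variables, evolve a reduced variable set, retrain, cross-validate) can be mimicked schematically with generic scikit-learn components. The authors' artificial adaptive systems and genetic-doping algorithm are not reproduced here; the random-forest model and the crude mutation-only search below are only stand-ins.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=25, n_informative=8, noise=5.0, random_state=0)

def fitness(mask):
    """Cross-validated skill of a model trained on the selected variable subset."""
    if mask.sum() == 0:
        return -np.inf
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    return cross_val_score(model, X[:, mask], y, cv=5, scoring="r2").mean()

# Crude evolutionary search over variable subsets (stand-in for the genetic-doping algorithm)
pop = rng.random((20, X.shape[1])) < 0.5
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = parents.copy()
    children ^= rng.random(children.shape) < 0.05   # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))

# Retrain on the reduced set and compare cross-validation uncertainty with the full model
full = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=0), X, y, cv=5)
red = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=0), X[:, best], y, cv=5)
print(f"full model R2: {full.mean():.2f} +/- {full.std():.2f}")
print(f"reduced model R2: {red.mean():.2f} +/- {red.std():.2f}")
```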
PockDrug: A Model for Predicting Pocket Druggability That Overcomes Pocket Estimation Uncertainties.
Borrel, Alexandre; Regad, Leslie; Xhaard, Henri; Petitjean, Michel; Camproux, Anne-Claude
2015-04-27
Predicting protein druggability is a key interest in the target identification phase of drug discovery. Here, we assess the pocket estimation methods' influence on druggability predictions by comparing statistical models constructed from pockets estimated using different pocket estimation methods: a proximity of either 4 or 5.5 Å to a cocrystallized ligand or DoGSite and fpocket estimation methods. We developed PockDrug, a robust pocket druggability model that copes with uncertainties in pocket boundaries. It is based on a linear discriminant analysis from a pool of 52 descriptors combined with a selection of the most stable and efficient models using different pocket estimation methods. PockDrug retains the best combinations of three pocket properties which impact druggability: geometry, hydrophobicity, and aromaticity. It results in an average accuracy of 87.9% ± 4.7% using a test set and exhibits higher accuracy (∼5-10%) than previous studies that used an identical apo set. In conclusion, this study confirms the influence of pocket estimation on pocket druggability prediction and proposes PockDrug as a new model that overcomes pocket estimation variability.
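A minimal sketch of the core modelling step, linear discriminant analysis on a few pocket descriptors, is given below with synthetic data. The three descriptors and their values are hypothetical and do not reproduce the 52-descriptor pool or the PockDrug training set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Illustrative pocket descriptors: roughly geometry (volume), hydrophobicity, aromaticity
n = 200
X_drug = rng.normal([900.0, 0.55, 0.25], [200.0, 0.10, 0.08], size=(n, 3))
X_less = rng.normal([450.0, 0.35, 0.12], [150.0, 0.10, 0.06], size=(n, 3))
X = np.vstack([X_drug, X_less])
y = np.r_[np.ones(n), np.zeros(n)]  # 1 = druggable, 0 = less druggable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

proba = lda.predict_proba(X_te)[:, 1]            # druggability probability per pocket
print("test accuracy:", accuracy_score(y_te, proba > 0.5))
```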
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainties of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error.
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
Probabilistic Simulation of the Human Factor in Structural Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Singhal, Surendra N.
1994-01-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
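A small sketch of an MFIE-style calculation follows. The product form used here is one commonly cited version of the multi-factor interaction equation; the factor ratings, bounds and exponents are invented for illustration rather than taken from the study.

```python
import numpy as np

def mfie(current, reference, final, exponents):
    """Multi-factor interaction equation (one commonly cited form):
    ratio = prod_i ((A_iF - A_i) / (A_iF - A_i0)) ** a_i
    current, reference, final, exponents are arrays over the primitive variables."""
    current, reference, final, exponents = map(np.asarray, (current, reference, final, exponents))
    terms = ((final - current) / (final - reference)) ** exponents
    return float(np.prod(terms))

# Hypothetical human-factor ratings on a 0-10 scale (illustrative only):
# job satisfaction, work load, health
reference = np.array([5.0, 5.0, 5.0])   # nominal condition
final = np.array([10.0, 10.0, 10.0])    # extreme condition where performance degrades fully
exponents = np.array([0.5, 0.5, 0.5])   # assumed sensitivities

nominal = mfie([5.0, 5.0, 5.0], reference, final, exponents)    # = 1 by construction
stressed = mfie([7.0, 8.0, 6.0], reference, final, exponents)
print(f"relative performance, nominal: {nominal:.2f}, stressed: {stressed:.2f}")
```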
Assessing Uncertainty in Risk Assessment Models (BOSC CSS meeting)
In vitro assays are increasingly being used in risk assessments. Uncertainty in assays leads to uncertainty in the models used for risk assessments. This poster assesses uncertainty in the ER and AR models.
NASA Astrophysics Data System (ADS)
Blanchard-Wrigglesworth, E.; Barthélemy, A.; Chevallier, M.; Cullather, R.; Fučkar, N.; Massonnet, F.; Posey, P.; Wang, W.; Zhang, J.; Ardilouze, C.; Bitz, C. M.; Vernieres, G.; Wallcraft, A.; Wang, M.
2017-08-01
Dynamical model forecasts in the Sea Ice Outlook (SIO) of September Arctic sea-ice extent over the last decade have shown lower skill than that found in both idealized model experiments and hindcasts of previous decades. Additionally, it is unclear how different model physics, initial conditions or forecast post-processing (bias correction) techniques contribute to SIO forecast uncertainty. In this work, we have produced a seasonal forecast of 2015 Arctic summer sea ice using SIO dynamical models initialized with identical sea-ice thickness in the central Arctic. Our goals are to calculate the relative contribution of model uncertainty and irreducible error growth to forecast uncertainty and assess the importance of post-processing, and to contrast pan-Arctic forecast uncertainty with regional forecast uncertainty. We find that prior to forecast post-processing, model uncertainty is the main contributor to forecast uncertainty, whereas after forecast post-processing forecast uncertainty is reduced overall, model uncertainty is reduced by an order of magnitude, and irreducible error growth becomes the main contributor to forecast uncertainty. While all models generally agree in their post-processed forecasts of September sea-ice volume and extent, this is not the case for sea-ice concentration. Additionally, forecast uncertainty of sea-ice thickness grows at a much higher rate along Arctic coastlines relative to the central Arctic ocean. Potential ways of offering spatial forecast information based on the timescale over which the forecast signal beats the noise are also explored.
NASA Astrophysics Data System (ADS)
Akinci, A.; Pace, B.
2017-12-01
In this study, we discuss the seismic hazard variability of peak ground acceleration (PGA) at a 475-year return period in the Southern Apennines of Italy. The uncertainty and parametric sensitivity are presented to quantify the impact of several fault parameters on ground motion predictions for 10% exceedance in 50-year hazard. A time-independent PSHA model is constructed based on the long-term recurrence behavior of seismogenic faults, adopting the characteristic earthquake model for those sources capable of rupturing the entire fault segment with a single maximum magnitude. The fault-based source model uses the dimensions and slip rates of mapped faults to develop magnitude-frequency estimates for characteristic earthquakes. Variability of each selected fault parameter is given by a truncated normal random variable distribution, described by a standard deviation about a mean value. A Monte Carlo approach, based on random balanced sampling of the logic tree, is used in order to capture the uncertainty in seismic hazard calculations. For generating both uncertainty and sensitivity maps, we perform 200 simulations for each of the fault parameters. The results are synthesized both in the frequency-magnitude distributions of the modeled faults and in different maps: the overall uncertainty maps provide a confidence interval for the PGA values, and the parameter uncertainty maps determine the sensitivity of the hazard assessment to variability of every logic-tree branch. These logic-tree branches, analyzed through the Monte Carlo approach, are maximum magnitude, fault length, fault width, fault dip and slip rate. The overall variability of these parameters is determined by varying them simultaneously in the hazard calculations, while the sensitivity to each parameter is determined by varying that fault parameter while fixing the others. However, in this study we do not investigate the sensitivity of mean hazard results to the consideration of different GMPEs. The distribution of possible seismic hazard results is illustrated by a 95% confidence factor map, which indicates the dispersion about the mean value, and a coefficient of variation map, which shows percent variability. The results of our study clearly illustrate the influence of active fault parameters on probabilistic seismic hazard maps.
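The sampling scheme described above (truncated normal draws of each fault parameter, 200 simulations, overall versus one-at-a-time variation) can be sketched as follows. The parameter values and the placeholder hazard kernel are illustrative and stand in for the full PSHA and GMPE chain.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)

def trunc_normal(mean, sd, lower, upper, size):
    """Truncated normal sampler for a fault parameter."""
    a, b = (lower - mean) / sd, (upper - mean) / sd
    return truncnorm.rvs(a, b, loc=mean, scale=sd, size=size, random_state=rng)

n_sim = 200  # simulations per parameter, as in the study

# Illustrative fault parameters (hypothetical values, not the Southern Apennines inputs)
slip_rate = trunc_normal(mean=0.8, sd=0.2, lower=0.3, upper=1.5, size=n_sim)   # mm/yr
max_mag   = trunc_normal(mean=6.7, sd=0.2, lower=6.2, upper=7.2, size=n_sim)   # Mw

def pga_475yr(slip, mmax):
    """Placeholder hazard kernel: stands in for the full PSHA + GMPE computation."""
    return 0.1 + 0.2 * slip + 0.05 * (mmax - 6.5)

# Overall variability: vary parameters simultaneously
pga_all = pga_475yr(slip_rate, max_mag)
# Sensitivity to slip rate: vary it alone while fixing the other parameter at its mean
pga_slip = pga_475yr(slip_rate, max_mag.mean())

print(f"overall PGA coefficient of variation: {pga_all.std() / pga_all.mean():.2%}")
print(f"slip-rate-only PGA coefficient of variation: {pga_slip.std() / pga_slip.mean():.2%}")
```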
The burden of typhoid fever in low- and middle-income countries: A meta-regression approach
Warren, Joshua L.; Crawford, Forrest W.; Weinberger, Daniel M.; Kürüm, Esra; Pak, Gi Deok; Marks, Florian; Pitzer, Virginia E.
2017-01-01
Background Upcoming vaccination efforts against typhoid fever require an assessment of the baseline burden of disease in countries at risk. There are no typhoid incidence data from most low- and middle-income countries (LMICs), so model-based estimates offer insights for decision-makers in the absence of readily available data. Methods We developed a mixed-effects model fit to data from 32 population-based studies of typhoid incidence in 22 locations in 14 countries. We tested the contribution of economic and environmental indices for predicting typhoid incidence using a stochastic search variable selection algorithm. We performed out-of-sample validation to assess the predictive performance of the model. Results We estimated that 17.8 million cases of typhoid fever occur each year in LMICs (95% credible interval: 6.9–48.4 million). Central Africa was predicted to experience the highest incidence of typhoid, followed by select countries in Central, South, and Southeast Asia. Incidence typically peaked in the 2–4 year old age group. Models incorporating widely available economic and environmental indicators were found to describe incidence better than null models. Conclusions Recent estimates of typhoid burden may under-estimate the number of cases and magnitude of uncertainty in typhoid incidence. Our analysis permits prediction of overall as well as age-specific incidence of typhoid fever in LMICs, and incorporates uncertainty around the model structure and estimates of the predictors. Future studies are needed to further validate and refine model predictions and better understand year-to-year variation in cases. PMID:28241011
NASA Astrophysics Data System (ADS)
Seiller, G.; Roy, R.; Anctil, F.
2017-04-01
Uncertainties associated with the evaluation of the impacts of climate change on water resources are broad, stem from multiple sources, and lead to diagnoses that are sometimes difficult to interpret. Quantification of these uncertainties is a key element to yield confidence in the analyses and to provide water managers with valuable information. This work specifically evaluates the influence of hydrological modeling calibration metrics on future water resources projections, on thirty-seven watersheds in the Province of Québec, Canada. Twelve lumped hydrologic models, representing a wide range of operational options, are calibrated with three common objective functions derived from the Nash-Sutcliffe efficiency. The hydrologic models are forced with climate simulations corresponding to two RCPs, twenty-nine GCMs from CMIP5 (Coupled Model Intercomparison Project phase 5) and two post-treatment techniques, leading to future projections for the 2041-2070 period. Results show that the diagnosis of the impacts of climate change on water resources is quite affected by the hydrologic model selection and calibration metrics. Indeed, for the four selected hydrological indicators, dedicated to water management, parameters from the three objective functions can provide different interpretations in terms of absolute and relative changes, as well as projected change direction and climatic ensemble consensus. The GR4J model and a multimodel approach offer the best modeling options, based on calibration performance and robustness. Overall, these results illustrate the need to provide water managers with detailed information on relative change analyses, but also absolute change values, especially for hydrological indicators acting as security policy thresholds.
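Objective functions derived from the Nash-Sutcliffe efficiency are typically variants computed on transformed flows. The sketch below shows three common choices (untransformed, log and square-root flows) as an illustration, without implying these are exactly the three used in the study.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency on untransformed flows (favours high flows)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_log(obs, sim, eps=1e-6):
    """NSE on log-transformed flows (favours low flows)."""
    return nse(np.log(np.asarray(obs) + eps), np.log(np.asarray(sim) + eps))

def nse_sqrt(obs, sim):
    """NSE on square-root-transformed flows (a compromise between the two)."""
    return nse(np.sqrt(obs), np.sqrt(sim))

# Tiny illustration with made-up daily flows (m3/s)
obs = np.array([1.2, 0.9, 5.4, 12.0, 3.3, 1.1, 0.8])
sim = np.array([1.0, 1.1, 4.8, 13.5, 2.9, 1.0, 0.9])
print(f"NSE = {nse(obs, sim):.3f}, logNSE = {nse_log(obs, sim):.3f}, sqrtNSE = {nse_sqrt(obs, sim):.3f}")
```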
Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix
NASA Astrophysics Data System (ADS)
Hagen, V. S.; Arntsen, B.; Raknes, E. B.
2017-12-01
Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which underpins the quantification of the mismatch between synthetic (modelled) and true (real) measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signal. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling for the estimated parameter update can be found by computing the full Hessian matrix. Resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the known adjoint-state technique, with an expansion to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian and is essential when evaluating the quality of estimated parameters, due to the strong influence of source-receiver geometry and frequency content. The investigation is carried out on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
Application of rrm as behavior mode choice on modelling transportation
NASA Astrophysics Data System (ADS)
Surbakti, M. S.; Sadullah, A. F.
2018-03-01
Transportation mode selection, the first step in the transportation planning process, is probably one of the most important planning elements. The development of models that can explain the preferences of passengers regarding their chosen mode of public transport will contribute to the improvement and development of existing public transport. Logit models have been widely used to determine mode choice models in which the alternatives are different transport modes. Random Regret Minimization (RRM) is a theory of choice behavior under uncertainty. Since its development, the theory has been used in various disciplines, such as marketing, microeconomics, psychology, management, and transportation. This article aims to show the use of RRM in various mode-choice settings, drawing on the results of studies that have been conducted both in North Sumatra and western Java.
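For readers unfamiliar with RRM, a small sketch of the widely used Chorus-type regret formulation and its logit-type choice probabilities follows. The attribute values and weights are invented, and the exact specification estimated in the article may differ.

```python
import numpy as np

def rrm_probabilities(X, beta):
    """Random Regret Minimization choice probabilities (Chorus-style formulation).
    X: (n_alternatives, n_attributes) attribute matrix; beta: attribute weights."""
    n = X.shape[0]
    regret = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            # regret from comparing alternative i with competitor j, attribute by attribute
            regret[i] += np.sum(np.log1p(np.exp(beta * (X[j] - X[i]))))
    # logit-type transformation of negative regret
    expneg = np.exp(-regret)
    return expneg / expneg.sum()

# Hypothetical mode-choice example: columns = in-vehicle time (min) and cost, both weighted negatively
X = np.array([[45.0, 2.0],    # bus
              [30.0, 3.5],    # minibus
              [20.0, 6.0]])   # car
beta = np.array([-0.05, -0.3])
print("choice probabilities (bus, minibus, car):", np.round(rrm_probabilities(X, beta), 3))
```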
Irreducible Uncertainty in Terrestrial Carbon Projections
NASA Astrophysics Data System (ADS)
Lovenduski, N. S.; Bonan, G. B.
2016-12-01
We quantify and isolate the sources of uncertainty in projections of carbon accumulation by the ocean and terrestrial biosphere over 2006-2100 using output from Earth System Models participating in the 5th Coupled Model Intercomparison Project. We consider three independent sources of uncertainty in our analysis of variance: (1) internal variability, driven by random, internal variations in the climate system, (2) emission scenario, driven by uncertainty in future radiative forcing, and (3) model structure, wherein different models produce different projections given the same emission scenario. Whereas uncertainty in projections of ocean carbon accumulation by 2100 is 100 Pg C and driven primarily by emission scenario, uncertainty in projections of terrestrial carbon accumulation by 2100 is 50% larger than that of the ocean, and driven primarily by model structure. This structural uncertainty is correlated with emission scenario: the variance associated with model structure is an order of magnitude larger under a business-as-usual scenario (RCP8.5) than a mitigation scenario (RCP2.6). In an effort to reduce this structural uncertainty, we apply various model weighting schemes to our analysis of variance in terrestrial carbon accumulation projections. The largest reductions in uncertainty are achieved when giving all the weight to a single model; here the uncertainty is of a similar magnitude to the ocean projections. Such an analysis suggests that this structural uncertainty is irreducible given current terrestrial model development efforts.
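The three-way analysis of variance described here can be illustrated with a toy partitioning of synthetic projections into internal-variability, model-structure and scenario components; the numbers are made up and only the bookkeeping is shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic projections of cumulative terrestrial carbon uptake by 2100 (Pg C):
# dimensions (scenario, model, ensemble member) -- values are illustrative only
n_scen, n_model, n_member = 2, 8, 5
scen_effect = np.array([40.0, 80.0])                                 # emission-scenario signal
model_effect = rng.normal(0.0, 30.0, size=(n_scen, n_model))         # structural spread per scenario
proj = (scen_effect[:, None, None] + model_effect[:, :, None]
        + rng.normal(0.0, 8.0, size=(n_scen, n_model, n_member)))    # internal variability

internal = proj.var(axis=2).mean()                     # spread across ensemble members
structure = proj.mean(axis=2).var(axis=1).mean()       # spread across models, per scenario
scenario = proj.mean(axis=(1, 2)).var()                # spread across scenarios
total = internal + structure + scenario
for name, v in [("internal", internal), ("model structure", structure), ("scenario", scenario)]:
    print(f"{name:15s}: {v / total:5.1%} of total variance")
```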
Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer
NASA Astrophysics Data System (ADS)
Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain
2015-09-01
Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and constrains more the calibration with the use of both surface and subsurface observed data. Sensitivity and uncertainty analyses were performed on predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.
Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.
2018-01-01
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection results changed between the two approaches, and incorporating observer knowledge in cause-of-death increased the variability associated with parameter estimates when compared to the traditional approach. These differences between the two approaches can impact reported results, and therefore, it is critical to explicitly incorporate expert knowledge in statistical methods to ensure rigorous inference.
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
This research presents an extension of the UNEEC (Uncertainty Estimation based on Local Errors and Clustering; Shrestha and Solomatine, 2006, 2008; Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and that the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty, including input, parameter and model structure uncertainty, are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, we first use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method, which encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., an ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method also considers parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide more realistic estimation of model predictions.
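The UNEEC-P sequence, Monte Carlo sampling in parameter space, pooling residual realizations, and reading off empirical prediction quantiles, is sketched below on a toy regression. The clustering and ANN encapsulation steps of the full method are noted but not implemented, and the parameter distributions are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data and an "optimal" linear model y = a*x + b
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0 + 0.2 * x)   # heteroscedastic noise
a_hat, b_hat = np.polyfit(x, y, 1)

# UNEEC-P step: Monte Carlo sampling in parameter space (assumed parameter distributions)
n_real = 500
a_s = rng.normal(a_hat, 0.05, n_real)
b_s = rng.normal(b_hat, 0.20, n_real)
residuals = y[None, :] - (a_s[:, None] * x[None, :] + b_s[:, None])  # all residual realizations

# Prediction quantiles from the pooled empirical residual distribution
lo, hi = np.percentile(residuals, [5, 95])
print(f"90% band around predictions: [{lo:.2f}, {hi:.2f}] added to y = {a_hat:.2f}x + {b_hat:.2f}")
# In full UNEEC(-P), these quantiles would be localized by clustering the input space
# and encapsulated in a machine-learning model (e.g. an ANN) rather than kept global.
```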
A tool for efficient, model-independent management optimization under uncertainty
White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.
2018-01-01
To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
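The chance-constraint idea, shifting a simulated constraint value by a risk-dependent multiple of its FOSM standard deviation, can be shown in a few lines. The values below are illustrative and the function is a schematic of the concept, not the PESTPP-OPT implementation.

```python
from scipy.stats import norm

def chance_constraint_value(sim_value, fosm_std, bound, risk):
    """Return the risk-shifted constraint value; feasible if it is <= bound.
    risk = 0.5 reproduces the deterministic constraint; risk > 0.5 is risk-averse."""
    shifted = sim_value + norm.ppf(risk) * fosm_std
    return shifted, shifted <= bound

sim_value = 9.2      # model-simulated constraint, e.g. streamflow depletion (illustrative units)
fosm_std = 0.8       # FOSM-estimated standard deviation from input/observation uncertainty
bound = 10.0
for risk in (0.5, 0.8, 0.95):
    shifted, ok = chance_constraint_value(sim_value, fosm_std, bound, risk)
    print(f"risk {risk:.2f}: shifted value {shifted:.2f} -> {'feasible' if ok else 'infeasible'}")
```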
NASA Astrophysics Data System (ADS)
Migliavacca, M.; Sonnentag, O.; Keenan, T. F.; Cescatti, A.; O'Keefe, J.; Richardson, A. D.
2012-06-01
Phenology, the timing of recurring life cycle events, controls numerous land surface feedbacks to the climate system through the regulation of exchanges of carbon, water and energy between the biosphere and atmosphere. Terrestrial biosphere models, however, are known to have systematic errors in the simulation of spring phenology, which potentially could propagate to uncertainty in modeled responses to future climate change. Here, we used the Harvard Forest phenology record to investigate and characterize sources of uncertainty in predicting phenology, and the subsequent impacts on model forecasts of carbon and water cycling. Using a model-data fusion approach, we combined information from 20 yr of phenological observations of 11 North American woody species, with 12 leaf bud-burst models that varied in complexity. Akaike's Information Criterion indicated support for spring warming models with photoperiod limitations and, to a lesser extent, models that included chilling requirements. We assessed three different sources of uncertainty in phenological forecasts: parameter uncertainty, model uncertainty, and driver uncertainty. The latter was characterized running the models to 2099 using 2 different IPCC climate scenarios (A1fi vs. B1, i.e. high CO2 emissions vs. low CO2 emissions scenario). Parameter uncertainty was the smallest (average 95% Confidence Interval - CI: 2.4 days century-1 for scenario B1 and 4.5 days century-1 for A1fi), whereas driver uncertainty was the largest (up to 8.4 days century-1 in the simulated trends). The uncertainty related to model structure is also large and the predicted bud-burst trends as well as the shape of the smoothed projections varied among models (±7.7 days century-1 for A1fi, ±3.6 days century-1 for B1). The forecast sensitivity of bud-burst to temperature (i.e. days bud-burst advanced per degree of warming) varied between 2.2 days °C-1 and 5.2 days °C-1 depending on model structure. We quantified the impact of uncertainties in bud-burst forecasts on simulated photosynthetic CO2 uptake and evapotranspiration (ET) using a process-based terrestrial biosphere model. Uncertainty in phenology model structure led to uncertainty in the description of forest seasonality, which accumulated to uncertainty in annual model estimates of gross primary productivity (GPP) and ET of 9.6% and 2.9%, respectively. A sensitivity analysis shows that a variation of ±10 days in bud-burst dates led to a variation of ±5.0% for annual GPP and about ±2.0% for ET. For phenology models, differences among future climate scenarios (i.e. driver) represent the largest source of uncertainty, followed by uncertainties related to model structure, and finally, related to model parameterization. The uncertainties we have quantified will affect the description of the seasonality of ecosystem processes and in particular the simulation of carbon uptake by forest ecosystems, with a larger impact of uncertainties related to phenology model structure, followed by uncertainties related to phenological model parameterization.
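Model support via Akaike's Information Criterion is computed from residual sums of squares and parameter counts; the sketch below uses made-up bud-burst residuals and parameter counts purely to show the comparison mechanics.

```python
import numpy as np

def aic(n_params, residuals):
    """Akaike's Information Criterion assuming Gaussian errors (additive constant omitted)."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * n_params

rng = np.random.default_rng(11)
# Made-up bud-burst day-of-year observations and predictions from two candidate models
obs = rng.normal(128.0, 6.0, 20)
pred_warming = obs + rng.normal(0.0, 3.0, 20)       # spring-warming + photoperiod model
pred_chilling = obs + rng.normal(1.0, 4.5, 20)      # model with chilling requirement

models = {"warming+photoperiod (4 params)": (4, obs - pred_warming),
          "chilling (6 params)": (6, obs - pred_chilling)}
aics = {name: aic(k, r) for name, (k, r) in models.items()}
best = min(aics.values())
for name, a in aics.items():
    print(f"{name}: AIC = {a:6.1f}, dAIC = {a - best:5.1f}")
```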
Uncertainty analysis of hydrological modeling in a tropical area using different algorithms
NASA Astrophysics Data System (ADS)
Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh
2018-01-01
Hydrological modeling outputs are subject to uncertainty resulting from different sources of errors (e.g., errors in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative and meant to improve the reliability of modeling results. The uncertainty analysis must overcome difficulties in the calibration of hydrological models, which further increase in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO with P-factor > 0.83, R-factor < 0.56, R2 > 0.91, NSE > 0.89, and 0.18
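Of the four algorithms, GLUE is the simplest to illustrate: sample parameters, retain behavioural sets above a likelihood threshold, and summarize the resulting prediction band with the P-factor and R-factor. The toy model, the threshold, and the use of unweighted quantiles below are simplifications of the full procedure.

```python
import numpy as np

rng = np.random.default_rng(21)

def nse(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy "hydrological model": two parameters scaling and delaying a rainfall signal
rain = rng.gamma(2.0, 2.0, 120)
def model(k, lag):
    return k * np.roll(rain, int(lag))

obs = model(0.6, 2) + rng.normal(0.0, 0.4, rain.size)

# GLUE: sample parameters, keep "behavioural" sets above a likelihood threshold
n = 5000
k_s = rng.uniform(0.1, 1.0, n)
lag_s = rng.integers(0, 5, n)
likelihood = np.array([nse(obs, model(k, l)) for k, l in zip(k_s, lag_s)])
behavioural = likelihood > 0.5                        # subjective GLUE threshold

# Prediction band from the behavioural simulations (full GLUE would weight by likelihood)
sims = np.array([model(k, l) for k, l in zip(k_s[behavioural], lag_s[behavioural])])
lower, upper = np.quantile(sims, [0.025, 0.975], axis=0)
p_factor = np.mean((obs >= lower) & (obs <= upper))   # fraction of observations inside the band
r_factor = np.mean(upper - lower) / obs.std()         # relative width of the band
print(f"behavioural sets: {behavioural.sum()}, P-factor: {p_factor:.2f}, R-factor: {r_factor:.2f}")
```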
Computational intelligence in earth sciences and environmental applications: issues and challenges.
Cherkassky, V; Krasnopolsky, V; Solomatine, D P; Valdes, J
2006-03-01
This paper introduces a generic theoretical framework for predictive learning, and relates it to data-driven and learning applications in earth and environmental sciences. The issues of data quality, selection of the error function, incorporation of the predictive learning methods into the existing modeling frameworks, expert knowledge, model uncertainty, and other application-domain specific problems are discussed. A brief overview of the papers in the Special Issue is provided, followed by discussion of open issues and directions for future research.
A methodological framework to assess the carbon balance of tropical managed forests.
Piponiot, Camille; Cabon, Antoine; Descroix, Laurent; Dourdain, Aurélie; Mazzei, Lucas; Ouliac, Benjamin; Rutishauser, Ervan; Sist, Plinio; Hérault, Bruno
2016-12-01
Managed forests are a major component of tropical landscapes. Production forests as designated by national forest services cover up to 400 million ha, i.e. half of the forested area in the humid tropics. Forest management thus plays a major role in the global carbon budget, but there is no unified method to estimate carbon fluxes from tropical managed forests. In this study we propose a new time- and spatially explicit methodology to estimate the above-ground carbon budget of selective logging at regional scale. The yearly balance of a logging unit, i.e. the elementary management unit of a forest estate, is modelled by aggregating three sub-models encompassing (i) emissions from extracted wood, (ii) emissions from logging damage and deforested areas and (iii) carbon storage from post-logging recovery. Models are parametrised and uncertainties are propagated through an MCMC algorithm. As a case study, we used 38 years of National Forest Inventories in French Guiana, northeastern Amazonia, to estimate the above-ground carbon balance (i.e. the net carbon exchange with the atmosphere) of selectively logged forests. Over this period, the net carbon balance of selective logging in the French Guianan Permanent Forest Estate is estimated to be between 0.12 and 1.33 Tg C, with a median value of 0.64 Tg C. Uncertainties in the model could be reduced by improving the accuracy of both the logging damage and large woody necromass decay submodels. We propose an innovative carbon accounting framework relying upon basic logging statistics. This flexible tool allows the carbon budget of tropical managed forests to be estimated in a wide range of tropical regions.
Garay, József; Csiszár, Villő; Móri, Tamás F; Szilágyi, András; Varga, Zoltán; Számadó, Szabolcs
2018-01-01
Parent-offspring communication remains an unresolved challenge for biologists. The difficulty of the challenge comes from the fact that it is a multifaceted problem with connections to life-history evolution, parent-offspring conflict, kin selection and signalling. Previous efforts mainly focused on modelling resource allocation at the expense of the dynamic interaction during a reproductive season. Here we present a two-stage model of begging where the first stage models the interaction between nestlings and parents within a nest and the second stage models the life-history trade-offs. We show in an asexual population that honest begging results in decreased variance of collected food between siblings, which leads to an increased mean number of surviving offspring. Thus, honest begging can be seen as a special bet-hedging against informational uncertainty, which not only decreases the variance of fitness but also increases its arithmetic mean.
Szilágyi, András; Varga, Zoltán
2018-01-01
Parent-offspring communication remains an unresolved challenge for biologists. The difficulty of the challenge comes from the fact that it is a multifaceted problem with connections to life-history evolution, parent-offspring conflict, kin selection and signalling. Previous efforts mainly focused on modelling resource allocation at the expense of the dynamic interaction during a reproductive season. Here we present a two-stage model of begging where the first stage models the interaction between nestlings and parents within a nest and the second stage models the life-history trade-offs. We show in an asexual population that honest begging results in decreased variance of collected food between siblings, which leads to an increased mean number of surviving offspring. Thus, honest begging can be seen as a special bet-hedging against informational uncertainty, which not only decreases the variance of fitness but also increases its arithmetic mean. PMID:29494630
NASA Astrophysics Data System (ADS)
Sawicka, K.; Breuer, L.; Houska, T.; Santabarbara Ruiz, I.; Heuvelink, G. B. M.
2016-12-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Advances in uncertainty propagation analysis and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition for universal applicability, including case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language, we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the propagation of uncertainty from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo techniques, as well as several uncertainty visualization functions. Here we will demonstrate that the 'spup' package is an effective and easy-to-use tool to be applied even in a very complex study case, and that it can be used in multi-disciplinary research and model-based decision support. As an example, we use the ecological LandscapeDNDC model to analyse the propagation of uncertainties associated with spatial variability of the model driving forces, such as rainfall, nitrogen deposition and fertilizer inputs. The uncertainty propagation is analysed for the prediction of emissions of N2O and CO2 for a German low-mountainous, agriculturally developed catchment. The study tests the effect of spatial correlations on spatially aggregated model outputs, and could serve as guidance for developing best management practices and model improvement strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greg Ruskauff
2006-06-01
The Pahute Mesa groundwater flow model supports the FFACO UGTA corrective action strategy objective of providing an estimate of the vertical and horizontal extent of contaminant migration for each CAU in order to predict contaminant boundaries. A contaminant boundary is the model-predicted perimeter that defines the extent of radionuclide-contaminated groundwater from underground nuclear testing above background conditions exceeding Safe Drinking Water Act (SDWA) standards. The contaminant boundary will be composed of both a perimeter boundary and a lower hydrostratigraphic unit (HSU) boundary. Additional results showing contaminant concentrations and the location of the contaminant boundary at selected times will also be presented. These times may include the verification period, the end of the five-year proof-of-concept period, as well as other times that are of specific interest. The FFACO (1996) requires that the contaminant transport model predict the contaminant boundary at 1,000 years and “at a 95% level of confidence.” The Pahute Mesa Phase I flow model described in this report provides, through the flow fields derived from alternative hydrostratigraphic framework models (HFMs) and recharge models, one part of the data required to compute the contaminant boundary. Other components include the simplified source term model, which incorporates uncertainty and variability in the factors that control radionuclide release from an underground nuclear test (SNJV, 2004a), and the transport model with the concomitant parameter uncertainty as described in Shaw (2003). The uncertainty in all the above model components will be evaluated to produce the final contaminant boundary. This report documents the development of the groundwater flow model for the Central and Western Pahute Mesa CAUs.
Quantifying radar-rainfall uncertainties in urban drainage flow modelling
NASA Astrophysics Data System (ADS)
Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.
2015-09-01
This work presents the results of the implementation of a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the North of England. The spatial and temporal correlations of the RR errors as well as the error covariance matrix were computed to build a RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated through the urban sewer system. The results showed that the measured flow peaks and flow volumes are often bounded within the uncertainty area produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that there are additional sources of uncertainty that must be considered, such as the uncertainty in the urban drainage model structure, the uncertainty in the urban drainage model calibrated parameters, and the uncertainty in the measured sewer flows.
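Generating RR ensembles from an error covariance matrix typically amounts to drawing correlated (here log-multiplicative) perturbations and applying them to the radar field. The correlation length, error variance and coordinates below are assumed for illustration and are not the fitted error model of the study.

```python
import numpy as np

rng = np.random.default_rng(8)

# Schematic radar-rainfall (RR) error model: ensembles of multiplicative errors whose
# spatial correlation decays exponentially with the distance between gauge locations.
n_sites, n_members = 6, 50
coords = rng.uniform(0.0, 20.0, size=(n_sites, 2))                   # km, illustrative
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
corr_len, sigma = 8.0, 0.3                                            # assumed error statistics
cov = (sigma ** 2) * np.exp(-dist / corr_len)                         # error covariance matrix

radar_field = rng.gamma(2.0, 2.0, n_sites)                            # one radar scan (mm/h)
log_errors = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_members)
ensemble = radar_field[None, :] * np.exp(log_errors)                  # RR ensemble members

print("ensemble spread per site (mm/h):", np.round(ensemble.std(axis=0), 2))
# Each ensemble member would then be fed to the sewer-flow model to propagate RR uncertainty.
```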
Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty.
Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang
2015-01-01
Natural disasters have occurred frequently in recent years, causing huge casualties and property losses. As a result, people pay more and more attention to emergency logistics problems. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain travel times, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable, optimal path. We then simplify the original model under the scenario in which the vehicle only follows the optimal path from the emergency logistics center to the affected point, and use the Lingo software to solve it. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method.
Grey-Theory-Based Optimization Model of Emergency Logistics Considering Time Uncertainty
Qiu, Bao-Jian; Zhang, Jiang-Hua; Qi, Yuan-Tao; Liu, Yang
2015-01-01
Natural disasters have occurred frequently in recent years, causing huge casualties and property losses. As a result, people pay more and more attention to emergency logistics problems. This paper studies the emergency logistics problem with multiple centers, multiple commodities, and a single affected point. Considering that paths near the disaster point may be damaged, that information on the state of the paths is incomplete, and that travel times are uncertain, we establish a nonlinear programming model whose objective function is the maximization of the time-satisfaction degree. To overcome these drawbacks of incomplete information and uncertain travel times, this paper first evaluates the multiple roads of the transportation network based on grey theory and selects the reliable, optimal path. We then simplify the original model under the scenario in which the vehicle only follows the optimal path from the emergency logistics center to the affected point, and use the Lingo software to solve it. Numerical experiments are presented to show the feasibility and effectiveness of the proposed method. PMID:26417946
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger
2008-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A inflight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
The X-43A Six Degree of Freedom Monte Carlo Analysis
NASA Technical Reports Server (NTRS)
Baumann, Ethan; Bahm, Catherine; Strovers, Brian; Beck, Roger; Richard, Michael
2007-01-01
This report provides an overview of the Hyper-X research vehicle Monte Carlo analysis conducted with the six-degree-of-freedom simulation. The methodology and model uncertainties used for the Monte Carlo analysis are presented as permitted. In addition, the process used to select hardware validation test cases from the Monte Carlo data is described. The preflight Monte Carlo analysis indicated that the X-43A control system was robust to the preflight uncertainties and provided the Hyper-X project an important indication that the vehicle would likely be successful in accomplishing the mission objectives. The X-43A in-flight performance is compared to the preflight Monte Carlo predictions and shown to exceed the Monte Carlo bounds in several instances. Possible modeling shortfalls are presented that may account for these discrepancies. The flight control laws and guidance algorithms were robust enough as a result of the preflight Monte Carlo analysis that the unexpected in-flight performance did not have undue consequences. Modeling and Monte Carlo analysis lessons learned are presented.
He, L; Huang, G H; Lu, H W
2010-04-15
Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), and the associated solution method, for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, different from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g., soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to existing modeling approaches (in which only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.
NASA Astrophysics Data System (ADS)
Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus
2017-04-01
Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional/national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable at various conditions and spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model-, input data- and parameter-induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model-induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter-induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes, as obtained by different Bayesian calibration studies for each model configuration. Input data-induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practice data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability, followed by sampled sets of input datasets and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Further, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty for a regional N2O and NO emissions inventory for the state of Saxony, Germany.
Modeling uncertainty in computerized guidelines using fuzzy logic.
Jaulent, M. C.; Joyaux, C.; Colombet, I.; Gillois, P.; Degoulet, P.; Chatellier, G.
2001-01-01
Computerized Clinical Practice Guidelines (CPGs) improve quality of care by assisting physicians in their decision making. However, a number of problems emerge when patients with similar characteristics are given contradictory recommendations. In this article, we propose to use fuzzy logic to model the uncertainty due to the use of crisp thresholds in CPGs. A fuzzy classification procedure has been developed that provides, for each message of the CPG, a strength of recommendation that rates the appropriateness of the recommendation for the patient under consideration. This work is done in the context of a CPG for the diagnosis and management of hypertension, published in 1997 by the French agency ANAES. A population of 82 patients with mild to moderate hypertension was selected, and the results of the classification system were compared to those given by a classical decision tree. Observed agreement is 86.6%, and the variability of recommendations for patients with similar characteristics is reduced. PMID:11825196
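The fuzzification of guideline thresholds can be sketched with a simple ramp membership function. The blood-pressure cut-offs and the single rule below are illustrative only and do not reproduce the ANAES guideline logic.

```python
import numpy as np

# Schematic fuzzification of a guideline threshold: instead of a crisp cut-off at
# 140 mmHg systolic blood pressure, membership ramps between 135 and 145 mmHg.
def fuzzy_above(value, low, high):
    """Degree (0..1) to which `value` is 'above' a threshold softened over [low, high]."""
    return float(np.clip((value - low) / (high - low), 0.0, 1.0))

def recommendation_strength(sbp, dbp):
    """Strength with which an illustrative 'treat hypertension' message applies."""
    high_sbp = fuzzy_above(sbp, 135.0, 145.0)
    high_dbp = fuzzy_above(dbp, 85.0, 95.0)
    return max(high_sbp, high_dbp)          # fuzzy OR of the two conditions

for sbp, dbp in [(138, 84), (141, 84), (150, 96)]:
    print(f"SBP {sbp}, DBP {dbp} -> recommendation strength {recommendation_strength(sbp, dbp):.2f}")
```

Patients just below and just above a crisp threshold thus receive nearby strengths instead of opposite recommendations, which is the variability reduction reported above.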
Host Model Uncertainty in Aerosol Radiative Effects: the AeroCom Prescribed Experiment and Beyond
NASA Astrophysics Data System (ADS)
Stier, Philip; Schutgens, Nick; Bian, Huisheng; Boucher, Olivier; Chin, Mian; Ghan, Steven; Huneeus, Nicolas; Kinne, Stefan; Lin, Guangxing; Myhre, Gunnar; Penner, Joyce; Randles, Cynthia; Samset, Bjorn; Schulz, Michael; Yu, Hongbin; Zhou, Cheng; Bellouin, Nicolas; Ma, Xiaoyan; Yu, Fangqun; Takemura, Toshihiko
2013-04-01
Anthropogenic and natural aerosol radiative effects are recognized to affect global and regional climate. Multi-model "diversity" in estimates of the aerosol radiative effect is often perceived as a measure of the uncertainty in modelling aerosol itself. However, current aerosol models vary considerably in model components relevant for the calculation of aerosol radiative forcings and feedbacks and the associated "host-model uncertainties" are generally convoluted with the actual uncertainty in aerosol modelling. In the AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in eleven participating models. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that require further attention. However, uncertainties in aerosol radiative effects also include short-term and long-term feedback processes that will be systematically explored in future intercomparison studies. Here we will present an overview of the proposals for discussion and results from early scoping studies.
Uncertainty in tsunami sediment transport modeling
Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.
2016-01-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
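The core SROM idea, approximating a random input by a small set of fixed samples with optimized weights and then propagating those samples through independent deterministic model calls, can be sketched as follows. This is a simplified moment-matching version (a full SROM would also match the CDF and, for vectors, correlations), and the imperfection distribution and stand-in model are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Target random input: a design imperfection, assumed here to follow N(1.0, 0.05^2)
target_mean, target_std = 1.0, 0.05
samples = np.sort(rng.normal(target_mean, target_std, size=10))  # fixed SROM sample locations

def moment_mismatch(w):
    w = np.abs(w) / np.abs(w).sum()            # keep weights a valid probability mass function
    m1 = np.sum(w * samples)
    m2 = np.sum(w * samples**2)
    return (m1 - target_mean)**2 + (m2 - (target_std**2 + target_mean**2))**2

res = minimize(moment_mismatch, x0=np.full(10, 0.1))
weights = np.abs(res.x) / np.abs(res.x).sum()

def deterministic_model(e):
    """Stand-in for one deterministic topology-optimization / finite-element evaluation."""
    return 3.0 / e                              # e.g. compliance scaling with stiffness factor e

outputs = np.array([deterministic_model(s) for s in samples])  # independent deterministic calls
mean_out = np.sum(weights * outputs)
var_out = np.sum(weights * outputs**2) - mean_out**2
print("SROM mean:", mean_out, " SROM std:", np.sqrt(var_out))
```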
Doble, Brett; John, Thomas; Thomas, David; Fellowes, Andrew; Fox, Stephen; Lorgelly, Paula
2017-05-01
To identify parameters that drive the cost-effectiveness of precision medicine by comparing the use of multiplex targeted sequencing (MTS) to select targeted therapy based on tumour genomic profiles to either no further testing with chemotherapy or no further testing with best supportive care in the fourth-line treatment of metastatic lung adenocarcinoma. A combined decision tree and Markov model to compare costs, life-years, and quality-adjusted life-years over a ten-year time horizon from an Australian healthcare payer perspective. Data sources included the published literature and a population-based molecular cohort study (Cancer 2015). Uncertainty was assessed using deterministic sensitivity analyses and quantified by estimating the expected value of perfect/partial perfect information. Uncertainty due to technological/scientific advancement was assessed through a number of plausible future scenario analyses. Point-estimate incremental cost-effectiveness ratios indicate that MTS is not cost-effective for selecting fourth-line treatment of metastatic lung adenocarcinoma. Lower mortality rates during testing and for true-positive patients, lower health-state utility values for progressive disease, and targeted therapy resulting in reductions in inpatient visits, however, all resulted in more favourable cost-effectiveness estimates for MTS. The expected value to decision makers of removing all current decision uncertainty was estimated to be between AUD 5,962,843 and AUD 13,196,451, indicating that additional research to reduce uncertainty may be a worthwhile investment. Plausible future scenario analyses revealed limited improvements in cost-effectiveness under scenarios of improved test performance, decreased costs of testing/interpretation, and no biopsy costs/adverse events. Reductions in off-label targeted therapy costs, when considered together with the other scenarios, did, however, indicate more favourable cost-effectiveness of MTS. As more clinical evidence is generated for MTS, the model developed should be revisited and cost-effectiveness re-estimated under different testing scenarios to further understand the value of precision medicine and its potential impact on the overall health budget. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
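The expected value of perfect information quoted above can, in general, be estimated from probabilistic sensitivity analysis output as the gap between deciding with and without uncertainty resolved. A minimal per-patient sketch with synthetic net-monetary-benefit draws (the numbers are illustrative, not the study's model outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # probabilistic sensitivity analysis iterations

# Synthetic net monetary benefit samples for two strategies (values are illustrative only):
# column 0 = no further testing, column 1 = multiplex targeted sequencing (MTS)
nmb = np.column_stack([
    rng.normal(50_000, 8_000, n),
    rng.normal(49_000, 12_000, n),
])

ev_current_decision = nmb.mean(axis=0).max()   # choose the strategy that is best on average
ev_perfect_info = nmb.max(axis=1).mean()       # choose the best strategy in every draw
evpi_per_patient = ev_perfect_info - ev_current_decision
print(f"EVPI per patient: AUD {evpi_per_patient:,.0f}")
```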
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
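One of the closed-form solutions referred to above is the classical one-dimensional equilibrium convection-dispersion equation solution; a minimal Python sketch (assuming the Ogata-Banks constant-concentration inlet boundary condition, which may differ from the exact boundary conditions implemented in CXTFIT/Excel) is shown below.

```python
import numpy as np
from scipy.special import erfc

def cde_equilibrium(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D equilibrium convection-dispersion equation.

    dC/dt = D d2C/dx2 - v dC/dx, with C(0,t) = c0 and C(x,0) = 0.
    x [m], t [d], v pore-water velocity [m/d], D dispersion coefficient [m2/d].
    """
    x, t = np.asarray(x, float), np.asarray(t, float)
    a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * c0 * (a + b)

# Breakthrough curve at a column outlet (x = 0.3 m) for a conservative tracer (illustrative values):
times = np.linspace(0.05, 2.0, 5)
print(cde_equilibrium(x=0.3, t=times, v=0.25, D=0.01))
```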
NASA Astrophysics Data System (ADS)
Dieppois, B.; Pohl, B.; Eden, J.; Crétat, J.; Rouault, M.; Keenlyside, N.; New, M. G.
2017-12-01
The water management community has hitherto neglected or underestimated many of the uncertainties in climate impact scenarios, in particular those associated with decadal climate variability. Uncertainty in state-of-the-art global climate models (GCMs) is time-scale dependent, e.g. stronger at decadal than at interannual timescales, in response to the different parameterizations and to internal climate variability. In addition, non-stationarity in statistical downscaling is widely recognized as a key problem, in which the time-scale dependency of predictors plays an important role. As with global climate modelling, therefore, the selection of downscaling methods must proceed with caution to avoid unintended consequences of over-correcting the noise in GCMs (e.g. interpreting internal climate variability as a model bias). GCM outputs from the Coupled Model Intercomparison Project 5 (CMIP5) have therefore first been selected based on their ability to reproduce southern African summer rainfall variability and its teleconnections with Pacific sea-surface temperatures across the dominant timescales. In observations, southern African summer rainfall has recently been shown to exhibit significant periodicities at the interannual (2-8 years), quasi-decadal (8-13 years) and inter-decadal (15-28 years) timescales, which can be interpreted as the signature of ENSO, the IPO, and the PDO over the region. Most CMIP5 GCMs underestimate southern African summer rainfall variability and its teleconnections with Pacific SSTs at these three timescales. In addition, according to a more in-depth analysis of historical and pi-control runs, this bias might result from internal climate variability in some of the CMIP5 GCMs, suggesting potential for bias-corrected predictions based on empirical statistical downscaling. A multi-timescale, regression-based downscaling procedure, which determines the predictors across the different timescales, has thus been used to simulate southern African summer rainfall. This multi-timescale procedure shows much better skill in simulating decadal variability than commonly used statistical downscaling approaches.
NASA Astrophysics Data System (ADS)
Shao, G.; Gallion, J.; Fei, S.
2016-12-01
Sound forest aboveground biomass estimation is required to monitor diverse forest ecosystems and their impacts on the changing climate. Lidar-based regression models have provided promising biomass estimates in most forest ecosystems. However, considerable uncertainties in biomass estimation have been reported in temperate hardwood and hardwood-dominated mixed forests. Variation in site productivity in temperate hardwood forests diversifies height and diameter growth rates, which significantly reduces the correlation between tree height and diameter at breast height (DBH) in mature and structurally complex forests. It is therefore difficult to use height-based lidar metrics to predict DBH-based, field-measured biomass through a simple regression model that ignores the variation in site productivity. In this study, we established a multi-dimensional nonlinear regression model incorporating lidar metrics and site productivity classes derived from soil features. In the regression model, lidar metrics provided horizontal and vertical structural information, and productivity classes differentiated good and poor forest sites. The selection and combination of lidar metrics are discussed. Multiple regression models were employed and compared. Uncertainty analysis was applied to the best-fitting model. The effects of site productivity on the lidar-based biomass model are also addressed.
NASA Technical Reports Server (NTRS)
Schairer, Edward T.; Kushner, Laura K.; Drain, Bethany A.; Heineck, James T.; Durston, Donald A.
2017-01-01
Stereo photogrammetry was used to measure the position and attitude of a slender body of revolution during nozzle-plume/shock-wave interaction tests in the NASA Ames 9- by 7-Ft Supersonic Wind Tunnel. The model support system was designed to allow the model to be placed at many locations in the test section relative to a pressure rail on one sidewall. It included a streamwise traverse as well as a thin blade that offset the model axis from the sting axis. With these features the support system was more flexible than usual resulting in higher-than-usual uncertainty in the position and attitude of the model. Also contributing to this uncertainty were the absence of a balance, so corrections for sting deflections could not be applied, and the wings-vertical orientation of the model, which precluded using a gravity-based accelerometer to measure pitch angle. Therefore, stereo photogrammetry was chosen to provide independent measures of the model position and orientation. This paper describes the photogrammetry system and presents selected results from the test.
Multi-model approach to assess the impact of climate change on runoff
NASA Astrophysics Data System (ADS)
Dams, J.; Nossent, J.; Senbeta, T. B.; Willems, P.; Batelaan, O.
2015-10-01
The assessment of climate change impacts on hydrology is subject to uncertainties related to the climate change scenarios, stochastic uncertainties of the hydrological model and structural uncertainties of the hydrological model. This paper focuses on the contribution of structural uncertainty of hydrological models to the overall uncertainty of the climate change impact assessment. To quantify the structural uncertainty of hydrological models, four physically based hydrological models (SWAT, PRMS and a semi- and fully distributed version of the WetSpa model) are set up for a catchment in Belgium. Each model is calibrated using four different objective functions. Three climate change scenarios with a high, mean and low hydrological impact are statistically perturbed from a large ensemble of climate change scenarios and are used to force the hydrological models. This methodology allows assessing and comparing the uncertainty introduced by the climate change scenarios with the uncertainty introduced by the hydrological model structure. Results show that the hydrological model structure introduces a large uncertainty on both the average monthly discharge and the extreme peak and low flow predictions under the climate change scenarios. For the low impact climate change scenario, the uncertainty range of the mean monthly runoff is comparable to the range of these runoff values in the reference period. However, for the mean and high impact scenarios, this range is significantly larger. The uncertainty introduced by the climate change scenarios is larger than the uncertainty due to the hydrological model structure for the low and mean hydrological impact scenarios, but the reverse is true for the high impact climate change scenario. The mean and high impact scenarios project increasing peak discharges, while the low impact scenario projects increasing peak discharges only for peak events with return periods larger than 1.6 years. All models suggest for all scenarios a decrease of the lowest flows, except for the SWAT model with the mean hydrological impact climate change scenario. The results of this study indicate that besides the uncertainty introduced by the climate change scenarios also the hydrological model structure uncertainty should be taken into account in the assessment of climate change impacts on hydrology. To make it more straightforward and transparent to include model structural uncertainty in hydrological impact studies, there is a need for hydrological modelling tools that allow flexible structures and methods to validate model structures in their ability to assess impacts under unobserved future climatic conditions.
Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh
NASA Astrophysics Data System (ADS)
Mortuza, M. R.; Demissie, Y.; Li, H. Y.
2014-12-01
The increased frequency of extreme precipitation events, especially those with multiday durations, is responsible for recent urban floods and associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimates of the frequency of occurrence of such extreme precipitation events are thus important for designing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to 1-day, 2-day and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools information from climatologically similar sites to make reliable quantile estimates, provided that the pooling group is homogeneous and of reasonable size. We have used the region-of-influence (ROI) approach along with an L-moment-based homogeneity measure to identify a homogeneous pooling group for each site. Five three-parameter distributions (Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type III, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using Bayesian model averaging and balanced bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as to explore spatio-temporal variations of extreme precipitation and associated risk.
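For readers unfamiliar with L-moments, the sketch below shows the standard unbiased sample estimators (via probability-weighted moments) and the L-CV and L-skewness ratios used in regional homogeneity testing and distribution selection; the precipitation series is synthetic and the code is not from the study.

```python
import numpy as np

def sample_l_moments(x):
    """Unbiased sample L-moments (l1, l2) and ratios (L-CV, L-skewness)."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum(x * (i - 1) / (n - 1)) / n
    b2 = np.sum(x * (i - 1) * (i - 2) / ((n - 1) * (n - 2))) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l2 / l1, l3 / l2   # mean, L-scale, L-CV (t), L-skewness (t3)

# Illustrative annual-maximum 1-day precipitation series [mm] (synthetic):
amax = np.array([95, 120, 88, 140, 210, 105, 99, 160, 132, 115], float)
print(sample_l_moments(amax))
```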
A selection model for accounting for publication bias in a full network meta-analysis.
Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia
2014-12-30
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and the analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for the uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach to the uncertainty assessment of conceptual hydrological models that explicitly considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and by traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in the uncertainty estimates of the entire flow range and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models within a Bayesian framework. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
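A minimal sketch of the kind of machinery described, a random-walk Metropolis-Hastings sampler with an AR(1)-plus-Normal error likelihood (akin to Model 1), is given below; the one-parameter toy runoff model, flat priors and proposal scales are placeholders, not WASMOD or the study's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_runoff(precip, k):
    """Toy one-parameter 'hydrological model': linear reservoir response (placeholder)."""
    q = np.zeros_like(precip)
    for t in range(1, len(precip)):
        q[t] = (1 - k) * q[t - 1] + k * precip[t]
    return q

def log_likelihood(theta, precip, q_obs):
    """AR(1)-plus-Normal error model, conditioned on the first residual."""
    k, rho, sigma = theta
    if not (0 < k < 1 and -1 < rho < 1 and sigma > 0):
        return -np.inf
    e = q_obs - simulate_runoff(precip, k)
    eta = e[1:] - rho * e[:-1]                       # AR(1) innovations
    return -0.5 * np.sum(eta**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

# Synthetic forcing and observations
precip = rng.gamma(2.0, 5.0, 200)
q_obs = simulate_runoff(precip, 0.3) + rng.normal(0, 1.0, 200)

# Random-walk Metropolis-Hastings with flat priors inside the bounds checked above
theta = np.array([0.5, 0.0, 2.0])
logp = log_likelihood(theta, precip, q_obs)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, [0.02, 0.05, 0.1])
    logp_prop = log_likelihood(prop, precip, q_obs)
    if np.log(rng.uniform()) < logp_prop - logp:     # accept/reject step
        theta, logp = prop, logp_prop
    chain.append(theta.copy())
print("posterior means (k, rho, sigma):", np.mean(chain[1000:], axis=0))
```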
Grainger, Matthew James; Aramyan, Lusine; Piras, Simone; Quested, Thomas Edward; Righi, Simone; Setti, Marco; Vittuari, Matteo; Stewart, Gavin Bruce
2018-01-01
Food waste from households contributes the greatest proportion to total food waste in developed countries. Therefore, food waste reduction requires an understanding of the socio-economic (contextual and behavioural) factors that lead to its generation within the household. Addressing such a complex subject calls for sound methodological approaches that until now have been conditioned by the large number of factors involved in waste generation, by the lack of a recognised definition, and by limited available data. This work contributes to food waste generation literature by using one of the largest available datasets that includes data on the objective amount of avoidable household food waste, along with information on a series of socio-economic factors. In order to address one aspect of the complexity of the problem, machine learning algorithms (random forests and boruta) for variable selection integrated with linear modelling, model selection and averaging are implemented. Model selection addresses model structural uncertainty, which is not routinely considered in assessments of food waste in literature. The main drivers of food waste in the home selected in the most parsimonious models include household size, the presence of fussy eaters, employment status, home ownership status, and the local authority. Results, regardless of which variable set the models are run on, point toward large households as being a key target element for food waste reduction interventions. PMID:29389949
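The workflow of importance-based variable screening followed by model selection and averaging can be sketched roughly as below; here a plain random-forest importance ranking stands in for Boruta, the candidate models are simple linear regressions weighted by AIC (one common averaging choice, not necessarily the authors'), and the data are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 500
X = rng.normal(size=(n, 5))                                 # e.g. household size, age, income, ...
y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, n)     # avoidable food waste (synthetic)

# Step 1: screen variables by random-forest importance (stand-in for Boruta)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[::-1][:3]

# Step 2: fit linear models on all subsets of the screened variables, weight them by AIC
def aic(model, Xs, y):
    rss = np.sum((y - model.predict(Xs)) ** 2)
    k = Xs.shape[1] + 1
    return n * np.log(rss / n) + 2 * k

candidates = []
for r in range(1, len(keep) + 1):
    for subset in combinations(keep, r):
        Xs = X[:, list(subset)]
        m = LinearRegression().fit(Xs, y)
        candidates.append((subset, aic(m, Xs, y)))

aics = np.array([a for _, a in candidates])
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()                                    # Akaike weights for model averaging
for (subset, _), w in zip(candidates, weights):
    print("variables", subset, "weight", round(float(w), 3))
```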
NASA Astrophysics Data System (ADS)
Nowak, W.; Koch, J.
2014-12-01
Predicting DNAPL fate and transport in heterogeneous aquifers is challenging and subject to an uncertainty that needs to be quantified. Models for this task need to be equipped with an accurate source zone description, i.e., the distribution of mass of all partitioning phases (DNAPL, water, and soil) in all possible states ((im)mobile, dissolved, and sorbed), with mass-transfer algorithms, and with the simulation of transport processes in the groundwater. Such detailed models tend to be computationally cumbersome when used for uncertainty quantification. Therefore, a selective choice of the relevant model states, processes, and scales is both delicate and indispensable. We investigate the questions of what constitutes a meaningful level of model complexity and how to obtain an efficient model framework that is still physically and statistically consistent. In our proposed model, aquifer parameters and the contaminant source architecture are conceptualized jointly as random space functions. The governing processes are simulated in a three-dimensional, highly resolved, stochastic, and coupled model that can predict probability density functions of mass discharge and source depletion times. We apply a stochastic percolation approach as an emulator to simulate the contaminant source formation, a random-walk particle-tracking method to simulate DNAPL dissolution and solute transport within the aqueous phase, and a quasi-steady-state approach to solve for DNAPL depletion times. Using this novel model framework, we test whether, and to what degree, the desired model predictions are sensitive to simplifications often found in the literature. With this, we identify that aquifer heterogeneity, groundwater flow irregularity, uncertain and physically based contaminant source zones, and their mutual interlinkages are indispensable components of a sound model framework.
Uncertainty in the Modeling of Tsunami Sediment Transport
NASA Astrophysics Data System (ADS)
Jaffe, B. E.; Sugawara, D.; Goto, K.; Gelfenbaum, G. R.; La Selle, S.
2016-12-01
Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. A recent study (Jaffe et al., 2016) explores sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami properties, study site characteristics, available input data, sediment grain size, and the model used. Although uncertainty has the potential to be large, case studies for both forward and inverse models have shown that sediment transport modeling provides useful information on tsunami inundation and hydrodynamics that can be used to improve tsunami hazard assessment. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and the development of hybrid modeling approaches to exploit the strengths of forward and inverse models. As uncertainty in tsunami sediment transport modeling is reduced, and with increased ability to quantify uncertainty, the geologic record of tsunamis will become more valuable in the assessment of tsunami hazard. Jaffe, B., Goto, K., Sugawara, D., Gelfenbaum, G., and La Selle, S., "Uncertainty in Tsunami Sediment Transport Modeling", Journal of Disaster Research Vol. 11 No. 4, pp. 647-661, 2016, doi: 10.20965/jdr.2016.p0647 https://www.fujipress.jp/jdr/dr/dsstr001100040647/
Some challenges with statistical inference in adaptive designs.
Hung, H M James; Wang, Sue-Jane; Yang, Peiling
2014-01-01
Adaptive designs have generated a great deal of attention in the clinical trial community. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of the sample size or related statistical information, and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly this increased statistical uncertainty may be impossible to assess.
Multifidelity, Multidisciplinary Design Under Uncertainty with Non-Intrusive Polynomial Chaos
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Gumbert, Clyde
2017-01-01
The primary objective of this work is to develop an approach for multifidelity uncertainty quantification and to lay the framework for future design-under-uncertainty efforts. In this study, multifidelity refers both to the fidelity of the modeling of the physical systems and to the difference in the uncertainty in each of the models. For computational efficiency, a multifidelity surrogate modeling approach based on non-intrusive polynomial chaos using the point-collocation technique is developed for the treatment of both multifidelity modeling and multifidelity uncertainty modeling. Two stochastic model problems are used to demonstrate the developed methodologies: a transonic airfoil model and a multidisciplinary aircraft analysis model. The results of both showed that the multifidelity modeling approach was able to predict the output uncertainty of the high-fidelity model at a significant reduction in computational cost.
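A minimal single-fidelity sketch of point-collocation non-intrusive polynomial chaos for one standard-normal input is shown below (the multifidelity correction layer is omitted, and the "model" is a cheap placeholder); mean and variance follow from the orthogonality of the probabilists' Hermite basis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(4)
order = 4
n_colloc = 2 * (order + 1)                 # ~2x oversampling, a common point-collocation choice

def model(xi):
    """Placeholder for the expensive deterministic solver, with input xi ~ N(0, 1)."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

xi = rng.normal(size=n_colloc)             # collocation points in the standard-normal germ
y = model(xi)

Psi = hermevander(xi, order)               # probabilists' Hermite basis, columns He_0..He_order
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Statistics follow from orthogonality: E[He_k^2] = k! for a standard-normal germ
mean = coeffs[0]
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print("PCE mean:", mean, " PCE std:", np.sqrt(var))

# Monte Carlo check on the placeholder model
xi_mc = rng.normal(size=200_000)
print("MC  mean:", model(xi_mc).mean(), " MC  std:", model(xi_mc).std())
```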
Probabilistic accounting of uncertainty in forecasts of species distributions under climate change
Wenger, Seth J.; Som, Nicholas A.; Dauwalter, Daniel C.; Isaak, Daniel J.; Neville, Helen M.; Luce, Charles H.; Dunham, Jason B.; Young, Michael K.; Fausch, Kurt D.; Rieman, Bruce E.
2013-01-01
Forecasts of species distributions under future climates are inherently uncertain, but there have been few attempts to describe this uncertainty comprehensively in a probabilistic manner. We developed a Monte Carlo approach that accounts for uncertainty within generalized linear regression models (parameter uncertainty and residual error), uncertainty among competing models (model uncertainty), and uncertainty in future climate conditions (climate uncertainty) to produce site-specific frequency distributions of occurrence probabilities across a species’ range. We illustrated the method by forecasting suitable habitat for bull trout (Salvelinus confluentus) in the Interior Columbia River Basin, USA, under recent and projected 2040s and 2080s climate conditions. The 95% interval of total suitable habitat under recent conditions was estimated at 30.1–42.5 thousand km; this was predicted to decline to 0.5–7.9 thousand km by the 2080s. Projections for the 2080s showed that the great majority of stream segments would be unsuitable with high certainty, regardless of the climate data set or bull trout model employed. The largest contributor to uncertainty in total suitable habitat was climate uncertainty, followed by parameter uncertainty and model uncertainty. Our approach makes it possible to calculate a full distribution of possible outcomes for a species, and permits ready graphical display of uncertainty for individual locations and of total habitat.
NASA Astrophysics Data System (ADS)
Elshall, A. S.; Ye, M.; Niu, G. Y.; Barron-Gafford, G.
2015-12-01
Models in biogeoscience involve uncertainties in observation data, model inputs, model structure, model processes and modeling scenarios. To accommodate different sources of uncertainty, multimodel analyses such as model combination, model selection, model elimination and model discrimination are becoming more popular. To illustrate the theoretical and practical challenges of multimodel analysis, we use an example from microbial soil respiration modeling. Global soil respiration releases more than ten times more carbon dioxide to the atmosphere than all anthropogenic emissions. Thus, improving our understanding of microbial soil respiration is essential for improving climate change models. This study focuses on a poorly understood phenomenon: the soil microbial respiration pulses that occur in response to episodic rainfall pulses (the "Birch effect"). We hypothesize that the "Birch effect" is generated by three mechanisms. To test our hypothesis, we developed and assessed five evolving microbial-enzyme models against field measurements from a semiarid savannah characterized by pulsed precipitation. These five models evolve step-wise such that the first model includes none of the three mechanisms, while the fifth model includes all three. The basic component of Bayesian multimodel analysis is the estimation of the marginal likelihood, which is used to rank the candidate models based on their overall likelihood with respect to the observation data. The first part of the study focuses on using this Bayesian scheme to discriminate between the five candidate models. The second part discusses some theoretical and practical challenges, mainly the effects of likelihood-function selection and of the marginal likelihood estimation method on both model ranking and Bayesian model averaging. The study shows that making valid inferences from scientific data is not a trivial task, since we are uncertain not only about the candidate scientific models but also about the statistical methods used to discriminate between them.
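The central quantity referred to, the marginal likelihood (the prior-predictive average of the likelihood), can be illustrated with a brute-force Monte Carlo estimate for two toy one-parameter models and the resulting posterior model probabilities used in Bayesian model averaging; the data, models and priors below are placeholders, not the soil-respiration models of the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic observations of soil respiration pulses (arbitrary units)
y_obs = np.array([1.1, 2.9, 2.2, 4.3, 3.6])
x = np.arange(1.0, 6.0)
sigma = 0.5                                      # assumed observation error (standard deviation)

def loglik(y_pred):
    return -0.5 * np.sum(((y_obs - y_pred) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

# Candidate model 1: linear response; candidate model 2: saturating (Michaelis-Menten-like)
def model1(a):  return a * x
def model2(a):  return 5.0 * x / (a + x)

def marginal_likelihood(model, n_draws=20_000):
    """Simple Monte Carlo estimate: average the likelihood over prior draws a ~ U(0, 5)."""
    a = rng.uniform(0, 5, n_draws)
    ll = np.array([loglik(model(ai)) for ai in a])
    m = ll.max()
    return np.exp(m) * np.mean(np.exp(ll - m))   # log-sum-exp style averaging for stability

evid = np.array([marginal_likelihood(model1), marginal_likelihood(model2)])
post_model_prob = evid / evid.sum()              # equal prior model probabilities assumed
print("posterior model probabilities:", post_model_prob)
```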
Eddington's demon: inferring galaxy mass functions and other distributions from uncertain data
NASA Astrophysics Data System (ADS)
Obreschkow, D.; Murray, S. G.; Robotham, A. S. G.; Westmeier, T.
2018-03-01
We present a general modified maximum likelihood (MML) method for inferring generative distribution functions from uncertain and biased data. The MML estimator is identical to, but easier and many orders of magnitude faster to compute than the solution of the exact Bayesian hierarchical modelling of all measurement errors. As a key application, this method can accurately recover the mass function (MF) of galaxies, while simultaneously dealing with observational uncertainties (Eddington bias), complex selection functions and unknown cosmic large-scale structure. The MML method is free of binning and natively accounts for small number statistics and non-detections. Its fast implementation in the R-package dftools is equally applicable to other objects, such as haloes, groups, and clusters, as well as observables other than mass. The formalism readily extends to multidimensional distribution functions, e.g. a Choloniewski function for the galaxy mass-angular momentum distribution, also handled by dftools. The code provides uncertainties and covariances for the fitted model parameters and approximate Bayesian evidences. We use numerous mock surveys to illustrate and test the MML method, as well as to emphasize the necessity of accounting for observational uncertainties in MFs of modern galaxy surveys.
NASA Astrophysics Data System (ADS)
Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.
2016-07-01
The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength-dispersive spectrometer (WDS) microanalysis has been used in extremely wide-ranging areas, mainly in the field of materials science and for impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of an in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation including systematic and random errors is proposed. In addition, parameters evaluated during the validation process are also considered as part of the uncertainty model.
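A minimal sketch of the kind of uncertainty model described, combining random and systematic standard-uncertainty contributions into a combined and expanded uncertainty in the GUM fashion, is shown below; the numerical contributions are illustrative, not the SRM 482 results.

```python
import numpy as np

# Placeholder uncertainty contributions for a mass-fraction result (values are illustrative)
u_repeatability = 0.15      # random: standard deviation of replicate WDS measurements (wt %)
u_reference     = 0.10      # systematic: certified value of the SRM, from its certificate (wt %)
u_calibration   = 0.08      # systematic: calibration-curve contribution (wt %)

u_combined = np.sqrt(u_repeatability**2 + u_reference**2 + u_calibration**2)
k = 2                       # coverage factor for roughly 95 % coverage probability
U_expanded = k * u_combined
print(f"combined standard uncertainty: {u_combined:.3f} wt %")
print(f"expanded uncertainty (k=2):    {U_expanded:.3f} wt %")
```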
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Jain, Rishabh; Hodge, Bri-Mathias
A data-driven methodology is developed to analyze how ambient and wake turbulence affect the power generation of wind turbine(s). Using supervisory control and data acquisition (SCADA) data from a wind plant, we select two sets of wind velocity and power data for turbines on the edge of the plant that resemble (i) an out-of-wake scenario and (ii) an in-wake scenario. For each set of data, two surrogate models are developed to represent the turbine(s) power generation as a function of (i) the wind speed and (ii) the wind speed and turbulence intensity. Three types of uncertainties in turbine(s) power generation are investigated: (i) the uncertainty in power generation with respect to the reported power curve; (ii) the uncertainty in power generation with respect to the estimated power response that accounts for only mean wind speed; and (iii) the uncertainty in power generation with respect to the estimated power response that accounts for both mean wind speed and turbulence intensity. Results show that (i) the turbine(s) generally produce more power under the in-wake scenario than under the out-of-wake scenario with the same wind speed; and (ii) there is relatively more uncertainty in the power generation under the in-wake scenario than under the out-of-wake scenario.
NASA Astrophysics Data System (ADS)
Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles
2017-04-01
An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e. prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality of operational flood forecasts. In 2015, the French national and regional flood forecasting services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water-level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models has been selected (QUOIQUE method, Bourgin, 2014). It is a data-based, non-parametric approach that rests on as few assumptions as possible about the mathematical structure of the forecasting errors. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve forecast reliability for major floods, an attempt is made to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken predictive reliability for large floods, this error model is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed errors in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on operational forecasts issued during recent floods in France (major spring floods in June 2016 on the Loire river tributaries and flash floods in fall 2016) will be shown and discussed. References: Bourgin, F. (2014). How to assess the predictive uncertainty in hydrological modelling? An exploratory work on a large sample of watersheds, AgroParisTech. Wang, Q. J., Shrestha, D. L., Robertson, D. E. and Pokhrel, P. (2012). A log-sinh transformation for data normalization and variance stabilization. Water Resources Research, W05514, doi:10.1029/2011WR010973.
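The log-sinh transformation referred to (Wang et al., 2012), and one way empirical error quantiles could be turned into predictive bounds in transformed space, can be sketched as follows; the parameter values, synthetic forecast errors and quantile choice are illustrative assumptions, not the QUOIQUE implementation.

```python
import numpy as np

def log_sinh(y, a, b):
    """Log-sinh transformation of Wang et al. (2012): z = (1/b) * ln(sinh(a + b*y))."""
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    return (np.arcsinh(np.exp(b * z)) - a) / b

a, b = 0.01, 0.002               # illustrative transformation parameters
rng = np.random.default_rng(6)

q_obs = rng.gamma(3.0, 100.0, 500)                   # past observed discharges [m3/s] (synthetic)
q_fcst = q_obs * rng.lognormal(0.0, 0.2, 500)        # corresponding deterministic forecasts

# Forecast errors in transformed space are closer to homoscedastic
err_z = log_sinh(q_obs, a, b) - log_sinh(q_fcst, a, b)
q05, q95 = np.quantile(err_z, [0.05, 0.95])

# Predictive bounds for a new deterministic forecast of 800 m3/s
z_new = log_sinh(800.0, a, b)
print("90% predictive interval [m3/s]:",
      inv_log_sinh(z_new + q05, a, b), "-", inv_log_sinh(z_new + q95, a, b))
```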
Radar Altimetry for Hydrological Modeling and Monitoring in the Zambezi River Basin
NASA Astrophysics Data System (ADS)
Michailovsky, C. I.; Berry, P. A.; Smith, R. G.; Bauer-Gottwein, P.
2011-12-01
Hydrological model forecasts are subject to large uncertainties stemming from uncertain input data, model structure, parameterization and lack of sufficient calibration/validation data. For real-time or near-real-time applications data assimilation techniques such as the Ensemble Kalman Filter (EnKF) can be used to reduce forecast uncertainty by updating model states as new data becomes available. The use of remote sensing data is attractive for such applications as it provides wide geographical coverage and continuous time-series without the typically long delays that exist in obtaining in-situ data. River discharge is one of the main hydrological variables of interest, and while it cannot currently be directly measured remotely, water levels in rivers can be obtained from satellite based radar altimetry and converted to discharge through rating curves. This study aims to give a realistic assessment of the improvements that can be derived from the use of satellite radar altimetry measurements from the Envisat mission for discharge monitoring and modeling on the basin scale for the Zambezi River. The altimetry data used is the Radar AlTimetry (RAT) product developed at the Earth and Planetary Remote Sensing Laboratory at the De Montfort University. The first step in analyzing the data is the determination of potential altimetry targets which are the locations at which the Envisat orbit and the river network cross in order to select data points corresponding to surface water. The quality of the water level time-series is then analyzed for all targets and the exploitable targets identified. Rating curves are derived from in-situ or remotely-sensed data depending on data-availability at the various locations and discharge time-series are established. A Monte Carlo analysis is carried out to assess the uncertainties on the computed discharge. It was found that having a single cross-section and associated discharge measurement at one point in time significantly reduces discharge uncertainty. To assess improvements in model predictions, a model of the Zambezi River basin based on remote sensing data is set up with the Soil and Water Assessment Tool and calibrated with available in-situ data. The discharge data from altimetry is then used in an EnKF framework to update discharge in the model as it runs. The method showed improvements in prediction uncertainties for short lead times.
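A single EnKF analysis step of the kind used to assimilate an altimetry-derived discharge into an ensemble of model states can be sketched as follows (perturbed-observation formulation); the two-reach state, observation operator and error variances are illustrative, not the SWAT/Envisat configuration.

```python
import numpy as np

rng = np.random.default_rng(7)
n_ens = 50

# Ensemble of model states: here simply routed discharge at two reaches [m3/s] (illustrative)
ens = np.column_stack([rng.normal(420.0, 60.0, n_ens),
                       rng.normal(180.0, 30.0, n_ens)])

H = np.array([[1.0, 0.0]])          # observation operator: altimetry virtual station at reach 1
y_obs = 470.0                       # discharge from the altimetry rating curve [m3/s]
r_obs = 40.0**2                     # observation error variance (rating curve + altimetry)

# Ensemble Kalman filter analysis step with perturbed observations
X = ens.T                                           # state matrix, shape (2, n_ens)
A = X - X.mean(axis=1, keepdims=True)
P = A @ A.T / (n_ens - 1)                           # ensemble covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r_obs)    # Kalman gain
y_pert = y_obs + rng.normal(0.0, np.sqrt(r_obs), n_ens)
X_a = X + K @ (y_pert[None, :] - H @ X)

print("prior mean    :", X.mean(axis=1))
print("posterior mean:", X_a.mean(axis=1))
```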
Identifying influences on model uncertainty: an application using a forest carbon budget model
James E. Smith; Linda S. Heath
2001-01-01
Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...
NASA Technical Reports Server (NTRS)
Marvin, Joseph G.; Brown, James L.; Gnoffo, Peter A.
2013-01-01
A database compilation of hypersonic shock-wave/turbulent boundary layer experiments is provided. The experiments selected for the database are either 2D or axisymmetric, and include both compression corner and impinging type SWTBL interactions. The strength of the interactions range from attached to incipient separation to fully separated flows. The experiments were chosen based on criterion to ensure quality of the datasets, to be relevant to NASA's missions and to be useful for validation and uncertainty assessment of CFD Navier-Stokes predictive methods, both now and in the future. An emphasis on datasets selected was on surface pressures and surface heating throughout the interaction, but include some wall shear stress distributions and flowfield profiles. Included, for selected cases, are example CFD grids and setup information, along with surface pressure and wall heating results from simulations using current NASA real-gas Navier-Stokes codes by which future CFD investigators can compare and evaluate physics modeling improvements and validation and uncertainty assessments of future CFD code developments. The experimental database is presented tabulated in the Appendices describing each experiment. The database is also provided in computer-readable ASCII files located on a companion DVD.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its ability to provide accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contributions of individual model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parameter. Furthermore, each layer of uncertainty can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network, in which the different uncertainty components are represented as uncertain nodes. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility, using different grouping strategies for the uncertainty components. The variance-based sensitivity analysis is thus improved so that it can investigate the importance of an extended range of uncertainty sources: scenario, model, and other combinations of uncertainty components that represent key model-system processes (e.g., groundwater recharge, reactive transport). For testing and demonstration, the developed methodology was applied to a real-world groundwater reactive transport modeling case with various uncertainty sources. The results demonstrate that the new sensitivity analysis method can estimate accurate importance measures for any uncertainty source formed from different combinations of uncertainty components. The new methodology can provide useful information for environmental management and help decision-makers formulate policies and strategies.
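For reference, the standard variance-based first-order (Sobol) index estimator that such a framework builds on can be sketched with the usual A/B/AB sampling scheme; the three uniform inputs and the placeholder model below are illustrative, and the hierarchical scenario/model/parameter grouping is only hinted at in the comments.

```python
import numpy as np

rng = np.random.default_rng(8)
n, d = 20_000, 3        # Monte Carlo sample size, number of uncertain inputs

def model(x):
    """Placeholder for a groundwater reactive-transport prediction (e.g. a concentration)."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

# The inputs could represent, e.g., a recharge-scenario factor, a model-structure factor,
# and a parameter component, each treated as one uncertain node.
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1 = np.empty(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                              # re-sample only input i
    yAB = model(AB)
    S1[i] = np.mean(yB * (yAB - yA)) / var_y        # Saltelli (2010) first-order estimator
print("first-order Sobol indices:", np.round(S1, 3))
```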
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
Aeroservoelastic Uncertainty Model Identification from Flight Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
2001-01-01
Uncertainty modeling is a critical element in the estimation of robust stability margins for stability boundary prediction and robust flight control system development. There has been a serious deficiency to date in aeroservoelastic data analysis with attention to uncertainty modeling. Uncertainty can be estimated from flight data using both parametric and nonparametric identification techniques. The model validation problem addressed in this paper is to identify aeroservoelastic models with associated uncertainty structures from a limited amount of controlled excitation inputs over an extensive flight envelope. The challenge to this problem is to update analytical models from flight data estimates while also deriving non-conservative uncertainty descriptions consistent with the flight data. Multisine control surface command inputs and control system feedbacks are used as signals in a wavelet-based modal parameter estimation procedure for model updates. Transfer function estimates are incorporated in a robust minimax estimation scheme to get input-output parameters and error bounds consistent with the data and model structure. Uncertainty estimates derived from the data in this manner provide an appropriate and relevant representation for model development and robust stability analysis. This model-plus-uncertainty identification procedure is applied to aeroservoelastic flight data from the NASA Dryden Flight Research Center F-18 Systems Research Aircraft.
Bayesian Methods for Effective Field Theories
NASA Astrophysics Data System (ADS)
Wesolowski, Sarah
Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
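A much-simplified, non-Bayesian version of the truncation-uncertainty logic described, extracting dimensionless expansion coefficients from order-by-order results and sizing the first omitted term, can be sketched as follows; the orders, values and expansion parameter are invented for illustration, and the full statistical model in the thesis is considerably richer.

```python
import numpy as np

# Order-by-order EFT predictions for some observable (illustrative numbers, arbitrary units)
orders = np.array([0, 2, 3, 4])                  # expansion orders retained
y_orders = np.array([20.0, 24.5, 23.8, 24.0])    # prediction after including each order

Q = 0.33          # assumed expansion parameter (e.g. typical momentum over breakdown scale)
y_ref = y_orders[0]

# Extract dimensionless correction coefficients c_n from successive order-by-order differences
diffs = np.diff(y_orders)
c = diffs / (y_ref * Q ** orders[1:].astype(float))
c_bar = np.max(np.abs(c))                        # crude "natural size" estimate

# First-omitted-term truncation estimate at the highest order kept (k = 4):
k = orders[-1]
delta = y_ref * c_bar * Q ** (k + 1)
print("extracted coefficients:", np.round(c, 2))
print(f"truncation uncertainty estimate at order {k}: +/- {delta:.2f}")
```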
Smith, Des H.V.; Converse, Sarah J.; Gibson, Keith; Moehrenschlager, Axel; Link, William A.; Olsen, Glenn H.; Maguire, Kelly
2011-01-01
Captive breeding is key to management of severely endangered species, but maximizing captive production can be challenging because of poor knowledge of species breeding biology and the complexity of evaluating different management options. In the face of uncertainty and complexity, decision-analytic approaches can be used to identify optimal management options for maximizing captive production. Building decision-analytic models requires iterations of model conception, data analysis, model building and evaluation, identification of remaining uncertainty, further research and monitoring to reduce uncertainty, and integration of new data into the model. We initiated such a process to maximize captive production of the whooping crane (Grus americana), the world's most endangered crane, which is managed through captive breeding and reintroduction. We collected 15 years of captive breeding data from 3 institutions and used Bayesian analysis and model selection to identify predictors of whooping crane hatching success. The strongest predictor, and that with clear management relevance, was incubation environment. The incubation period of whooping crane eggs is split across two environments: crane nests and artificial incubators. Although artificial incubators are useful for allowing breeding pairs to produce multiple clutches, our results indicate that crane incubation is most effective at promoting hatching success. Hatching probability increased the longer an egg spent in a crane nest, from 40% hatching probability for eggs receiving 1 day of crane incubation to 95% for those receiving 30 days (time incubated in each environment varied independently of total incubation period). Because birds will lay fewer eggs when they are incubating longer, a tradeoff exists between the number of clutches produced and egg hatching probability. We developed a decision-analytic model that estimated 16 to be the optimal number of days of crane incubation needed to maximize the number of offspring produced. These results show that using decision-analytic tools to account for uncertainty in captive breeding can improve the rate at which such programs contribute to wildlife reintroductions.
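The incubation trade-off can be caricatured as maximizing expected chicks per pair over the number of days of crane incubation; in the sketch below both the hatching-probability curve (loosely anchored to the reported 40% at 1 day and 95% at 30 days) and the clutch-production decline are hypothetical functional forms chosen for illustration, so the optimum it returns is not the study's 16-day result.

```python
import numpy as np

days = np.arange(1, 31)

# Hypothetical logistic hatching-probability curve, rising from roughly 0.4 at day 1 to ~0.95+ at day 30
p_hatch = 1.0 / (1.0 + np.exp(-(days - 4.0) / 5.0))

# Hypothetical clutch production: pairs re-lay sooner when eggs are pulled early (illustrative slope)
clutches = 3.0 - 0.05 * days          # expected clutches per pair per season
eggs_per_clutch = 2.0

expected_chicks = clutches * eggs_per_clutch * p_hatch
best = days[np.argmax(expected_chicks)]
print("days of crane incubation maximizing expected chicks (toy model):", best)
```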
Comparison of Aircraft Models and Integration Schemes for Interval Management in the TRACON
NASA Technical Reports Server (NTRS)
Neogi, Natasha; Hagen, George E.; Herencia-Zapana, Heber
2012-01-01
Reusable models of common elements for communication, computation, decision and control in air traffic management are necessary in order to enable simulation, analysis and assurance of emergent properties, such as safety and stability, for a given operational concept. Uncertainties due to faults, such as dropped messages, along with non-linearities and sensor noise, are an integral part of these models and impact emergent system behavior. Flight control algorithms designed using a linearized version of the flight mechanics will exhibit error due to model uncertainty and may not be stable outside a neighborhood of the given point of linearization. Moreover, the communication mechanism by which the sensed state of an aircraft is fed back to a flight control system (such as an ADS-B message) impacts the overall system behavior, both because of sensor noise and because of dropped messages (vacant samples). Additionally, simulation of the flight control system can exhibit further numerical instability, due to the selection of the integration scheme and approximations made in the flight dynamics. We examine the theoretical and numerical stability of a speed controller under the Euler and Runge-Kutta schemes of integration, for the Maintain phase of a Mid-Term (2035-2045) Interval Management (IM) Operational Concept for descent and landing operations. We model uncertainties in communication due to missed ADS-B messages by vacant samples in the integration schemes and compare the emergent behavior of the system, in terms of stability, via the boundedness of the final system state. Any bound on the errors incurred by these uncertainties will play an essential part in a composable assurance argument required for real-time, flight-deck guidance and control systems. Thus, we believe that the creation of reusable models that possess property guarantees, such as safety and stability, is an innovative and essential requirement for assessing the emergent properties of novel airspace concepts of operation.
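A toy version of the comparison described, a first-order speed-tracking law integrated with forward Euler and with classical RK4 while the fed-back speed is occasionally held at a stale value to mimic vacant ADS-B samples, might look like the following; the time constant, commanded speed, step size and drop probability are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
dt, t_end = 0.5, 60.0
steps = int(t_end / dt)
tau, v_cmd = 8.0, 220.0                 # speed-response time constant [s], commanded speed [kt]
drops = rng.uniform(size=steps) < 0.3   # True where the ADS-B feedback message is dropped

def deriv(v, v_fb):
    """First-order speed-tracking law driven by the *fed-back* (possibly stale) speed."""
    return (v_cmd - v_fb) / tau

def simulate(method):
    v, v_fb = 180.0, 180.0              # true speed and last received feedback sample
    for k in range(steps):
        if not drops[k]:                # message received: refresh the feedback sample
            v_fb = v
        if method == "euler":
            v += dt * deriv(v, v_fb)
        else:                           # classical RK4, feedback held constant over the step
            k1 = deriv(v, v_fb)
            k2 = deriv(v + 0.5 * dt * k1, v_fb)
            k3 = deriv(v + 0.5 * dt * k2, v_fb)
            k4 = deriv(v + dt * k3, v_fb)
            v += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return v

print("final speed, Euler:", simulate("euler"))
print("final speed, RK4  :", simulate("rk4"))
```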
Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, J.; Polly, B.; Collis, J.
2013-09-01
This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al. 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.