Sample records for estimation uncertainty quantification

  1. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.

  2. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
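    As an illustration of the record above, here is a minimal Metropolis-Hastings sketch in Python, assuming a simple lognormal flow model with known scatter and synthetic data rather than the paper's actual likelihood or gage records; it contrasts the MCMC credible interval with the MLE confidence interval.

    ```python
    import numpy as np

    # Minimal Metropolis-Hastings sketch (illustrative only): infer the mean of
    # log-flows under an assumed lognormal model and compare the Bayesian credible
    # interval with the classical MLE confidence interval.
    rng = np.random.default_rng(0)
    log_q = rng.normal(loc=2.0, scale=0.6, size=200)   # synthetic log daily flows

    def log_posterior(mu, sigma=0.6):
        # Flat prior on mu; Gaussian likelihood for log-flows with known sigma.
        return -0.5 * np.sum((log_q - mu) ** 2) / sigma ** 2

    samples, mu = [], 0.0
    current = log_posterior(mu)
    for _ in range(20000):
        prop = mu + rng.normal(scale=0.1)              # random-walk proposal
        cand = log_posterior(prop)
        if np.log(rng.uniform()) < cand - current:     # accept/reject step
            mu, current = prop, cand
        samples.append(mu)
    post = np.array(samples[5000:])                    # discard burn-in

    # 95% credible interval from MCMC vs. 95% MLE confidence interval
    ci_bayes = np.percentile(post, [2.5, 97.5])
    mle = log_q.mean()
    se = 0.6 / np.sqrt(len(log_q))
    print("Bayesian 95% CrI:", ci_bayes, " MLE 95% CI:", (mle - 1.96 * se, mle + 1.96 * se))
    ```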

  3. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

This paper presents a Bayesian approach using the Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. As a result, the Bayesian MCMC method might be more favorable for uncertainty analysis and risk management.

  4. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
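    The Monte Carlo approach described above can be sketched as follows; the surveillance count, the multiplier distributions, and their ranges are invented placeholders, not the paper's actual foodborne-illness inputs.

    ```python
    import numpy as np

    # Illustrative Monte Carlo combination of non-sampling uncertainties: an
    # incidence estimate built from several uncertain multiplicative factors.
    # All distributions and numbers here are assumptions for demonstration.
    rng = np.random.default_rng(1)
    n = 100_000
    reported_cases = 50_000                                  # assumed surveillance count
    underreporting = rng.triangular(5, 10, 20, size=n)       # cases per reported case
    misdiagnosis   = rng.uniform(0.8, 1.2, size=n)           # diagnostic error factor
    coverage       = rng.normal(1.0, 0.05, size=n)           # surveillance coverage factor

    total_cases = reported_cases * underreporting * misdiagnosis / coverage
    lo, med, hi = np.percentile(total_cases, [2.5, 50, 97.5])
    print(f"median {med:,.0f}, 95% uncertainty interval [{lo:,.0f}, {hi:,.0f}]")
    ```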

  5. Uncertainty quantification in volumetric Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos

    2016-11-01

Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV. However, no framework exists for three-dimensional volumetric PIV. In volumetric PIV the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty that is estimated as an extension of the 2D PIV uncertainty framework. Finally, the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases the variation of estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
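    The final propagation step described above amounts to combining independent standard uncertainties in quadrature; a minimal sketch follows, with placeholder values standing in for the position, correlation, and calibration terms derived in the paper.

    ```python
    import numpy as np

    # Sketch of combining independent elemental uncertainties in quadrature, as in
    # a first-order uncertainty propagation equation. The numbers are placeholders.
    def combined_uncertainty(components):
        """Root-sum-square of independent standard uncertainties."""
        return np.sqrt(np.sum(np.asarray(components) ** 2))

    u_position    = 0.04   # assumed uncertainty from triangulated particle position (voxels)
    u_correlation = 0.06   # assumed uncertainty from 3D cross-correlation (voxels)
    u_calibration = 0.02   # assumed residual mapping-function uncertainty (voxels)

    print("combined displacement uncertainty:",
          combined_uncertainty([u_position, u_correlation, u_calibration]), "voxels")
    ```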

  6. Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer

    NASA Astrophysics Data System (ADS)

    Schulte, Horst

    2016-09-01

A new quantification method of uncertain models for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. This approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term, establishing and maintaining a motion on a so-called sliding surface. The injection signal is directly evaluated to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four degree-of-freedom model of the NREL 5MW reference turbine containing uncertainties.

  7. Self-optimized construction of transition rate matrices from accelerated atomistic simulations with Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Swinburne, Thomas D.; Perez, Danny

    2018-05-01

    A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.

  8. ARM Best Estimate Data (ARMBE) Products for Climate Science for a Sustainable Energy Future (CSSEF)

    DOE Data Explorer

    Riihimaki, Laura; Gaustad, Krista; McFarlane, Sally

    2014-06-12

    This data set was created for the Climate Science for a Sustainable Energy Future (CSSEF) model testbed project and is an extension of the hourly average ARMBE dataset to other extended facility sites and to include uncertainty estimates. Uncertainty estimates were needed in order to use uncertainty quantification (UQ) techniques with the data.

  9. Adjoint-Based Uncertainty Quantification with MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seifried, Jeffrey E.

    2011-09-01

This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  10. An Uncertainty Quantification Framework for Prognostics and Condition-Based Monitoring

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2014-01-01

This paper presents a computational framework for uncertainty quantification in prognostics in the context of condition-based monitoring of aerospace systems. The different sources of uncertainty and the various uncertainty quantification activities in condition-based prognostics are outlined in detail, and it is demonstrated that the Bayesian subjective approach is suitable for interpreting uncertainty in online monitoring. A state-space model-based framework for prognostics, which can rigorously account for the various sources of uncertainty, is presented. Prognostics consists of two important steps. First, the state of the system is estimated using Bayesian tracking, and then the future states of the system are predicted until failure, thereby computing the remaining useful life of the system. The proposed framework is illustrated using the power system of a planetary rover test-bed, which is being developed and studied at NASA Ames Research Center.
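    The two prognostic steps outlined above (state estimation followed by prediction to failure) can be sketched as below; the scalar Kalman-style update, the linear degradation model, and every number are assumptions for illustration, not the rover power-system model.

    ```python
    import numpy as np

    # Sketch of the two prognostic steps: (1) update the current state (here, a
    # battery state of charge) from a noisy measurement, and (2) propagate many
    # realizations of an assumed degradation model forward until a failure
    # threshold is crossed, yielding a remaining-useful-life (RUL) distribution.
    rng = np.random.default_rng(2)

    state_mean, state_var = 0.80, 0.02 ** 2          # assumed prior state of charge
    meas, meas_var = 0.78, 0.03 ** 2                 # assumed noisy observation
    # One-step Bayesian (Kalman-style) update of the state estimate
    gain = state_var / (state_var + meas_var)
    state_mean += gain * (meas - state_mean)
    state_var *= (1.0 - gain)

    # Monte Carlo prediction of RUL until the charge drops below a threshold
    threshold, n_mc = 0.30, 5000
    rul = np.empty(n_mc)
    for i in range(n_mc):
        x = rng.normal(state_mean, np.sqrt(state_var))
        t = 0
        while x > threshold and t < 1000:            # cap guards against rare non-degrading draws
            x -= rng.normal(0.01, 0.003)             # uncertain per-cycle degradation
            t += 1
        rul[i] = t
    print("RUL 5th/50th/95th percentiles (cycles):", np.percentile(rul, [5, 50, 95]))
    ```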

  11. Application of a Monte Carlo framework with bootstrapping for quantification of uncertainty in baseline map of carbon emissions from deforestation in Tropical Regions

    Treesearch

    William Salas; Steve Hagen

    2013-01-01

    This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...

  12. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  13. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  14. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  15. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  16. Quantification of uncertainty in flood risk assessment for flood protection planning: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Dittes, Beatrice; Špačková, Olga; Ebrahimian, Negin; Kaiser, Maria; Rieger, Wolfgang; Disse, Markus; Straub, Daniel

    2017-04-01

    Flood risk estimates are subject to significant uncertainties, e.g. due to limited records of historic flood events, uncertainty in flood modeling, uncertain impact of climate change or uncertainty in the exposure and loss estimates. In traditional design of flood protection systems, these uncertainties are typically just accounted for implicitly, based on engineering judgment. In the AdaptRisk project, we develop a fully quantitative framework for planning of flood protection systems under current and future uncertainties using quantitative pre-posterior Bayesian decision analysis. In this contribution, we focus on the quantification of the uncertainties and study their relative influence on the flood risk estimate and on the planning of flood protection systems. The following uncertainty components are included using a Bayesian approach: 1) inherent and statistical (i.e. limited record length) uncertainty; 2) climate uncertainty that can be learned from an ensemble of GCM-RCM models; 3) estimates of climate uncertainty components not covered in 2), such as bias correction, incomplete ensemble, local specifics not captured by the GCM-RCM models; 4) uncertainty in the inundation modelling; 5) uncertainty in damage estimation. We also investigate how these uncertainties are possibly reduced in the future when new evidence - such as new climate models, observed extreme events, and socio-economic data - becomes available. Finally, we look into how this new evidence influences the risk assessment and effectivity of flood protection systems. We demonstrate our methodology for a pre-alpine catchment in southern Germany: the Mangfall catchment in Bavaria that includes the city of Rosenheim, which suffered significant losses during the 2013 flood event.

  17. Quantification of Emission Factor Uncertainty

    EPA Science Inventory

    Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...

  18. Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data

    NASA Astrophysics Data System (ADS)

    Stegmeir, Matthew; Kassen, Dan

    2016-11-01

As Particle Image Velocimetry has continued to mature, it has developed into a robust and flexible technique for velocimetry used by expert and non-expert users. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, recently increased emphasis has been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often introduce complications in collecting "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work builds on the results of PIV uncertainty quantification techniques to develop a framework in which estimated PIV confidence intervals are used to compute reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling toward converged statistics are provided.
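    A convergence criterion of the kind described above might look like the following sketch, which assumes independent samples and placeholder values for the per-sample uncertainty and the target confidence-interval half-width.

    ```python
    import numpy as np

    # Sketch of a sampling-convergence criterion: use the estimated per-sample
    # measurement uncertainty together with the observed flow variability to
    # decide how many independent samples are needed to converge the mean
    # velocity to a target tolerance. All numbers are illustrative assumptions.
    rng = np.random.default_rng(3)
    u_meas = 0.05                      # assumed per-sample measurement uncertainty (m/s)
    samples = rng.normal(1.0, 0.15, size=400) + rng.normal(0.0, u_meas, size=400)

    sigma_total = samples.std(ddof=1)  # turbulence plus measurement scatter
    target_halfwidth = 0.01            # desired 95% CI half-width on the mean (m/s)
    n_required = int(np.ceil((1.96 * sigma_total / target_halfwidth) ** 2))
    print(f"observed scatter {sigma_total:.3f} m/s -> need ~{n_required} independent samples")
    ```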

  19. Uncertainty Quantification in Alchemical Free Energy Methods.

    PubMed

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-06-12

Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation, that is, an ensemble of independent MD simulations, which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
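    The ensemble idea above (independent replicas pooled into a mean and standard error) can be sketched as follows; the ten replica values are synthetic stand-ins for independent MD-based free energy estimates.

    ```python
    import numpy as np

    # Sketch of ensemble-based error estimation: pool several independent
    # replicas of the same calculation (here faked as noisy draws around an
    # assumed value) into a mean, a standard error, and a bootstrap interval.
    rng = np.random.default_rng(4)
    replicas = rng.normal(-7.3, 0.6, size=10)        # e.g. 10 independent dG estimates (kcal/mol)

    mean = replicas.mean()
    stderr = replicas.std(ddof=1) / np.sqrt(len(replicas))
    # Non-parametric bootstrap confidence interval over the replica means
    boot = rng.choice(replicas, size=(5000, len(replicas)), replace=True).mean(axis=1)
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"dG = {mean:.2f} +/- {stderr:.2f} kcal/mol, bootstrap 95% CI [{lo:.2f}, {hi:.2f}]")
    ```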

  20. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.

  1. Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitpas, Guillaume; Whitesides, Russell

UQHCCI_1 computes the measurement uncertainties of an HCCI engine test bench using the pressure trace and the estimated uncertainties of the measured quantities as inputs, and then propagates them through Bayesian inference and a mixing model.

  2. Modeling of structural uncertainties in Reynolds-averaged Navier-Stokes closures

    NASA Astrophysics Data System (ADS)

    Emory, Michael; Larsson, Johan; Iaccarino, Gianluca

    2013-11-01

    Estimation of the uncertainty in numerical predictions by Reynolds-averaged Navier-Stokes closures is a vital step in building confidence in such predictions. An approach to model-form uncertainty quantification that does not assume the eddy-viscosity hypothesis to be exact is proposed. The methodology for estimation of uncertainty is demonstrated for plane channel flow, for a duct with secondary flows, and for the shock/boundary-layer interaction over a transonic bump.

  3. Uncertainty quantification applied to the radiological characterization of radioactive waste.

    PubMed

    Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P

    2017-09-01

This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits set by the national authority for the waste disposal facility. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository, but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful for generating random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach to the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators.

  4. Classifying the Sizes of Explosive Eruptions using Tephra Deposits: The Advantages of a Numerical Inversion Approach

    NASA Astrophysics Data System (ADS)

    Connor, C.; Connor, L.; White, J.

    2015-12-01

Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification is implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and here is applied using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 eruption of Cerro Negro (Nicaragua), the 2011 Kirishima-Shinmoedake (Japan), and the 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.

  5. Quantification for Complex Assessment: Uncertainty Estimation in Final Year Project Thesis Assessment

    ERIC Educational Resources Information Center

    Kim, Ho Sung

    2013-01-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…

  6. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  7. Uncertainty Assessment of Synthetic Design Hydrographs for Gauged and Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Brunner, Manuela I.; Sikorska, Anna E.; Furrer, Reinhard; Favre, Anne-Catherine

    2018-03-01

    Design hydrographs described by peak discharge, hydrograph volume, and hydrograph shape are essential for engineering tasks involving storage. Such design hydrographs are inherently uncertain as are classical flood estimates focusing on peak discharge only. Various sources of uncertainty contribute to the total uncertainty of synthetic design hydrographs for gauged and ungauged catchments. These comprise model uncertainties, sampling uncertainty, and uncertainty due to the choice of a regionalization method. A quantification of the uncertainties associated with flood estimates is essential for reliable decision making and allows for the identification of important uncertainty sources. We therefore propose an uncertainty assessment framework for the quantification of the uncertainty associated with synthetic design hydrographs. The framework is based on bootstrap simulations and consists of three levels of complexity. On the first level, we assess the uncertainty due to individual uncertainty sources. On the second level, we quantify the total uncertainty of design hydrographs for gauged catchments and the total uncertainty of regionalizing them to ungauged catchments but independently from the construction uncertainty. On the third level, we assess the coupled uncertainty of synthetic design hydrographs in ungauged catchments, jointly considering construction and regionalization uncertainty. We find that the most important sources of uncertainty in design hydrograph construction are the record length and the choice of the flood sampling strategy. The total uncertainty of design hydrographs in ungauged catchments depends on the catchment properties and is not negligible in our case.
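    A bootstrap of the sort used in the framework above might look like this sketch; a method-of-moments Gumbel fit and synthetic annual peaks are assumptions chosen to keep the example self-contained, not the authors' flood-frequency model.

    ```python
    import numpy as np

    # Bootstrap sketch: resample the observed annual flood peaks with replacement,
    # refit a simple Gumbel distribution each time, and inspect the spread of the
    # resulting 100-year design peak. Data and the moment-based fit are illustrative.
    rng = np.random.default_rng(5)
    peaks = rng.gumbel(loc=120.0, scale=35.0, size=40)      # 40 years of annual peaks (m3/s)

    def gumbel_quantile(sample, p=0.99):                    # 100-year design peak
        beta = sample.std(ddof=1) * np.sqrt(6) / np.pi      # moment estimate of scale
        mu = sample.mean() - 0.5772 * beta                  # moment estimate of location
        return mu - beta * np.log(-np.log(p))

    boot = np.array([gumbel_quantile(rng.choice(peaks, size=len(peaks), replace=True))
                     for _ in range(2000)])
    print("design peak:", gumbel_quantile(peaks).round(1),
          "bootstrap 90% interval:", np.percentile(boot, [5, 95]).round(1))
    ```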

  8. Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Britton, Paul; Al Hassan, Mohammad; Ring, Robert

    2017-01-01

Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion and, with examples, will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  9. Estimation of the quantification uncertainty from flow injection and liquid chromatography transient signals in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.

    2004-06-01

The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty from FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be accounted for in the model, concluding that the instrument works as a concentration detector when it is used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and double-point calibration yielded results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
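    The weighted linear calibration described above can be sketched as follows; the calibration points, area uncertainties, and the neglect of the intercept-slope covariance are simplifying assumptions for illustration.

    ```python
    import numpy as np

    # Weighted linear calibration of peak area versus concentration, with a
    # first-order estimate of the uncertainty of an interpolated sample
    # concentration. Calibration data and weights are invented placeholders.
    conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])        # standard concentrations (ug/L)
    area = np.array([0.02, 1.05, 1.98, 5.10, 9.85, 20.3])    # measured peak areas
    u_area = np.array([0.02, 0.05, 0.06, 0.10, 0.15, 0.30])  # assumed area uncertainties

    w = 1.0 / u_area ** 2
    # Weighted least squares for area = a + b * conc
    W = np.sum(w); Sx = np.sum(w * conc); Sy = np.sum(w * area)
    Sxx = np.sum(w * conc ** 2); Sxy = np.sum(w * conc * area)
    delta = W * Sxx - Sx ** 2
    b = (W * Sxy - Sx * Sy) / delta
    a = (Sxx * Sy - Sx * Sxy) / delta
    var_a = Sxx / delta                                      # variance of intercept
    var_b = W / delta                                        # variance of slope

    # Interpolate an unknown sample and propagate intercept/slope uncertainty
    # (the a-b covariance term is neglected in this simple sketch)
    y_sample, u_y = 7.4, 0.12
    x_sample = (y_sample - a) / b
    u_x = x_sample * np.sqrt((u_y ** 2 + var_a) / (y_sample - a) ** 2 + var_b / b ** 2)
    print(f"sample concentration {x_sample:.2f} ug/L, approx. standard uncertainty {u_x:.2f}")
    ```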

  10. Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  11. Lognormal Uncertainty Estimation for Failure Rates

    NASA Technical Reports Server (NTRS)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain. Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This presentation will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  12. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  13. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a 'valid' measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an 'outlier' measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U_{68.5} uncertainties are estimated at the 68.5% confidence level while U_{95} uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.

  14. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
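    A non-intrusive polynomial chaos fit (least-squares regression rather than the paper's adaptive sparse-grid construction) can be sketched as follows; the one-dimensional toy model and sample sizes are assumptions for illustration.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    # Non-intrusive polynomial chaos sketch: fit a probabilists' Hermite expansion
    # of a model output in a standard-normal input by least squares, then read off
    # mean and variance from the coefficients. The "model" is a cheap stand-in for
    # an expensive reactor transient code.
    rng = np.random.default_rng(6)

    def model(xi):
        # Placeholder for an expensive simulation; xi is a standard-normal input.
        return np.exp(0.3 * xi) + 0.1 * xi ** 2

    order = 5
    xi = rng.standard_normal(200)                  # training inputs
    y = model(xi)
    Psi = hermevander(xi, order)                   # basis He_0..He_order evaluated at xi
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    # E[He_k^2] = k! under the standard normal, so:
    pc_mean = coef[0]
    pc_var = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))

    mc = model(rng.standard_normal(200_000))       # Monte Carlo reference
    print(f"PC mean {pc_mean:.4f} vs MC {mc.mean():.4f}; "
          f"PC std {np.sqrt(pc_var):.4f} vs MC {mc.std():.4f}")
    ```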

  15. On the quantification and efficient propagation of imprecise probabilities resulting from small datasets

    NASA Astrophysics Data System (ADS)

    Zhang, Jiaxin; Shields, Michael D.

    2018-01-01

This paper addresses the problem of uncertainty quantification and propagation when data for characterizing probability distributions are scarce. We propose a methodology wherein the full uncertainty associated with probability model form and parameter estimation is retained and efficiently propagated. This is achieved by applying the information-theoretic multimodel inference method to identify plausible candidate probability densities and the associated probabilities that each model is the best model in the Kullback-Leibler sense. The joint parameter densities for each plausible model are then estimated using Bayes' rule. We then propagate this full set of probability models by estimating an optimal importance sampling density that is representative of all plausible models, propagating this density, and reweighting the samples according to each of the candidate probability models. This is in contrast with conventional methods that try to identify a single probability model that encapsulates the full uncertainty caused by lack of data and consequently underestimate uncertainty. The result is a complete probabilistic description of both aleatory and epistemic uncertainty achieved with several orders of magnitude reduction in computational cost. It is shown how the model can be updated to adaptively accommodate added data and added candidate probability models. The method is applied to uncertainty analysis of plate buckling strength, where it is demonstrated how dataset size affects the confidence (or lack thereof) we can place in statistical estimates of response when data are lacking.
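    The multimodel spirit of the record above can be sketched with information-criterion weights over a few candidate distributions; AICc weights, the candidate set, and the synthetic data are stand-ins for the paper's Kullback-Leibler-based model probabilities and importance-sampling propagation.

    ```python
    import numpy as np
    from scipy import stats

    # Multimodel sketch: with scarce data, fit several candidate distributions,
    # weight them by AICc, and carry all plausible models forward instead of
    # committing to one. Candidates and data are illustrative assumptions.
    rng = np.random.default_rng(7)
    data = rng.lognormal(mean=0.2, sigma=0.4, size=25)       # scarce dataset

    candidates = {"lognorm": stats.lognorm, "gamma": stats.gamma, "weibull": stats.weibull_min}
    aicc, fits = {}, {}
    for name, dist in candidates.items():
        params = dist.fit(data)
        k, n = len(params), len(data)
        ll = np.sum(dist.logpdf(data, *params))
        aicc[name] = 2 * k - 2 * ll + 2 * k * (k + 1) / (n - k - 1)
        fits[name] = params

    d = np.array(list(aicc.values())) - min(aicc.values())
    weights = np.exp(-0.5 * d) / np.sum(np.exp(-0.5 * d))    # model probabilities

    # Model-averaged estimate of the 99th percentile of the response
    q99 = sum(w * candidates[name].ppf(0.99, *fits[name])
              for w, name in zip(weights, candidates))
    print(dict(zip(candidates, weights.round(3))), "model-averaged q99:", round(q99, 3))
    ```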

  16. The Challenges of Credible Thermal Protection System Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2013-01-01

    The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.

  17. A fault tree model to assess probability of contaminant discharge from shipwrecks.

    PubMed

    Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I

    2014-11-15

Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation.
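    A minimal fault tree sketch in this spirit is shown below; the cut sets, probabilities, and the uniform perturbation used for the uncertainty analysis are invented for illustration and do not reflect the paper's actual tree.

    ```python
    import numpy as np

    # Minimal fault tree sketch: the top event "hazardous substance discharge"
    # occurs if any minimal cut set occurs (OR gate), and each cut set requires
    # all of its basic events (AND gate). Structure and numbers are invented.
    rng = np.random.default_rng(8)

    cut_sets = [
        [0.05, 0.30],          # e.g. hull breach AND tank still containing oil
        [0.02, 0.50, 0.60],    # e.g. anchor strike AND corrosion AND no barrier
        [0.01],                # e.g. single-event scenario
    ]

    def top_event_probability(cut_sets):
        p_and = [np.prod(cs) for cs in cut_sets]
        # OR of independent cut sets: 1 - product of (1 - P_i)
        return 1.0 - np.prod([1.0 - p for p in p_and])

    # Uncertainty analysis: treat each basic-event probability as uncertain (+/-50%)
    n = 20_000
    samples = np.empty(n)
    for i in range(n):
        perturbed = [[p * rng.uniform(0.5, 1.5) for p in cs] for cs in cut_sets]
        samples[i] = top_event_probability(perturbed)
    print("point estimate:", round(top_event_probability(cut_sets), 4),
          "90% interval:", np.percentile(samples, [5, 95]).round(4))
    ```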

  18. Aerosol-type retrieval and uncertainty quantification from OMI data

    NASA Astrophysics Data System (ADS)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of problem for aerosol-type selection. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
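    The Bayesian-model-averaging step described above can be sketched as an evidence-weighted mixture of per-model AOD posteriors; the three candidate models, their evidences, and their Gaussian posteriors are invented placeholders.

    ```python
    import numpy as np

    # Bayesian model averaging sketch: each candidate aerosol model yields a
    # Gaussian posterior for AOD and a model evidence; the averaged posterior is
    # the evidence-weighted mixture. All numbers are invented placeholders.
    models = {
        "weakly_absorbing": {"evidence": 0.8e-3, "aod_mean": 0.42, "aod_std": 0.05},
        "biomass_burning":  {"evidence": 1.1e-3, "aod_mean": 0.48, "aod_std": 0.07},
        "dust":             {"evidence": 0.2e-3, "aod_mean": 0.55, "aod_std": 0.10},
    }
    ev = np.array([m["evidence"] for m in models.values()])
    w = ev / ev.sum()                                  # posterior model probabilities

    means = np.array([m["aod_mean"] for m in models.values()])
    stds = np.array([m["aod_std"] for m in models.values()])
    aod_mean = np.sum(w * means)
    # Law of total variance: within-model variance plus between-model spread
    aod_var = np.sum(w * (stds ** 2 + (means - aod_mean) ** 2))
    print(dict(zip(models, w.round(2))), f"BMA AOD = {aod_mean:.3f} +/- {np.sqrt(aod_var):.3f}")
    ```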

  19. Numerical Uncertainty Quantification for Radiation Analysis Tools

    NASA Technical Reports Server (NTRS)

    Anderson, Brooke; Blattnig, Steve; Clowdsley, Martha

    2007-01-01

Recently a new emphasis has been placed on engineering applications of space radiation analyses, and thus a systematic effort of Verification, Validation and Uncertainty Quantification (VV&UQ) of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. There are two sources of uncertainty in geometric discretization addressed in this paper that need to be quantified in order to understand the total uncertainty in estimating space radiation exposures. One source of uncertainty is in ray tracing: as the number of rays increases, the associated uncertainty decreases, but the computational expense increases. Thus, a cost-benefit analysis optimizing computational time versus uncertainty is needed and is addressed in this paper. The second source of uncertainty results from the interpolation over the dose vs. depth curves that is needed to determine the radiation exposure. The question, then, is how many shield thicknesses are needed to obtain an accurate result. Convergence testing is therefore performed to quantify the uncertainty associated with interpolating over different shield thickness spatial grids.

  20. Uncertainty quantification of effective nuclear interactions

    DOE PAGES

    Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz

    2016-03-02

We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean field calculations through the Skyrme parameters and effective field theory counter-terms by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in the light of recent developments.

  1. Uncertainty quantification of effective nuclear interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pérez, R. Navarro; Amaro, J. E.; Arriola, E. Ruiz

We give a brief review of the development of phenomenological NN interactions and the corresponding quantification of statistical uncertainties. We look into the uncertainty of effective interactions broadly used in mean field calculations through the Skyrme parameters and effective field theory counter-terms by estimating both statistical and systematic uncertainties stemming from the NN interaction. We also comment on the role played by different fitting strategies in the light of recent developments.

  2. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.

  3. Eigenspace perturbations for structural uncertainty estimation of turbulence closure models

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca

    2017-11-01

With the present state of computational resources, a purely numerical resolution of turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal amongst these variable-resolution approaches would be RANS models in two-equation closures, and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and therefore the explicit quantification of the predictive uncertainty is essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should be able to give reliable prediction intervals for different Quantities of Interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the Eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of this framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of this procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
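    The eigenvalue part of an eigenspace perturbation can be sketched as below; the baseline Reynolds-stress tensor, turbulent kinetic energy, and perturbation magnitude are illustrative assumptions, and only the eigenvalue (not eigenvector) perturbation is shown.

    ```python
    import numpy as np

    # Sketch of an eigenvalue perturbation: decompose the Reynolds-stress
    # anisotropy tensor, shift its eigenvalues toward a limiting turbulence
    # state, and reconstruct the stress. Baseline values are illustrative.
    k = 0.5                                              # turbulent kinetic energy
    R = np.array([[0.40, 0.05, 0.00],                    # baseline Reynolds stresses
                  [0.05, 0.35, 0.02],
                  [0.00, 0.02, 0.25]])

    b = R / (2.0 * k) - np.eye(3) / 3.0                  # anisotropy tensor
    eigval, eigvec = np.linalg.eigh(b)                   # eigenvalues in ascending order

    # Limiting states of the anisotropy eigenvalues (one-, two-, three-component)
    limits = {"1C": np.array([-1/3, -1/3, 2/3]),
              "2C": np.array([-1/3, 1/6, 1/6]),
              "3C": np.zeros(3)}

    delta = 0.3                                          # perturbation magnitude in [0, 1]
    for name, lam_lim in limits.items():
        lam_pert = (1.0 - delta) * eigval + delta * lam_lim
        b_pert = eigvec @ np.diag(lam_pert) @ eigvec.T
        R_pert = 2.0 * k * (b_pert + np.eye(3) / 3.0)
        print(name, "perturbed normal stresses:", np.diag(R_pert).round(3))
    ```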

  4. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
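    A commonly used first-order expression of this dependence estimates the displacement standard deviation from the image noise level and the summed intensity gradients of the subset. The Python sketch below codes that approximation for a synthetic subset; it is an illustrative stand-in, not the paper's full generalized solution (which also accounts for the subpixel displacement and the interpolation scheme).

```python
import numpy as np

def dic_random_error(subset, noise_sd):
    """First-order estimate of the random displacement error for a DIC subset:
    sigma_u ~ sqrt(2) * sigma_noise / sqrt(sum of squared intensity gradients).
    A common approximation, not the paper's generalized solution."""
    gy, gx = np.gradient(subset.astype(float))
    return (np.sqrt(2.0) * noise_sd / np.sqrt((gx**2).sum()),
            np.sqrt(2.0) * noise_sd / np.sqrt((gy**2).sum()))

# Synthetic speckle-like subset (hypothetical) and 1-grey-level camera noise.
rng = np.random.default_rng(1)
subset = rng.normal(128.0, 30.0, size=(21, 21))
print("sigma_ux, sigma_uy =", dic_random_error(subset, noise_sd=1.0))
```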

  5. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  6. How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?

    NASA Astrophysics Data System (ADS)

    Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.

    2013-12-01

    From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using an example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density of the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinity of the strong and weak sites for uranium. Therefore, the experiments with high initial concentration of U(VI) are needed, because in these experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov Chain Monte Carlo simulation using the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
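    The posterior sampling step can be illustrated with a bare-bones random-walk Metropolis sampler; DREAM differs in that it runs multiple chains and builds proposals from differences between them, but the accept/reject logic is the same. The model, parameters, and noise level below are hypothetical placeholders for the surface complexation and transport model, used only to show the sampling pattern.

```python
import numpy as np

rng = np.random.default_rng(2)

def breakthrough_model(theta):
    # Hypothetical stand-in for the reactive transport model: a breakthrough
    # curve shaped by a strong-site fraction and a log formation constant.
    f_strong, logK = theta
    t = np.linspace(0, 10, 50)
    return 1.0 / (1.0 + np.exp(-(t - 2.0 - 3.0 * f_strong - 0.5 * logK)))

def log_likelihood(theta, data, sigma=0.05):
    """Gaussian log-likelihood of the (hypothetical) breakthrough-curve model."""
    return -0.5 * np.sum((breakthrough_model(theta) - data) ** 2) / sigma**2

# Synthetic "observations" generated from known parameter values.
data = breakthrough_model([0.1, 2.0]) + rng.normal(0, 0.05, 50)

# Random-walk Metropolis (DREAM adds multi-chain differential-evolution proposals).
theta = np.array([0.5, 1.0])
chain, logp = [], log_likelihood(theta, data)
for _ in range(20000):
    prop = theta + rng.normal(0, [0.02, 0.1])
    lp = log_likelihood(prop, data)
    if np.log(rng.uniform()) < lp - logp:
        theta, logp = prop, lp
    chain.append(theta.copy())
chain = np.array(chain[5000:])          # discard burn-in
print("posterior mean:", chain.mean(axis=0))
print("posterior sd  :", chain.std(axis=0))
```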

  7. Information theoretic quantification of diagnostic uncertainty.

    PubMed

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
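    In information-theoretic terms, a test result changes the Shannon entropy of the disease probability, while Bayes' rule gives the post-test probability itself. The short sketch below (with hypothetical sensitivity, specificity, and pre-test probability) shows both calculations side by side.

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy (in bits) of a binary disease probability p."""
    return -sum(x * np.log2(x) for x in (p, 1.0 - p) if x > 0)

def post_test_probability(pretest, sens, spec, positive):
    """Bayes' rule for a dichotomous test result."""
    if positive:
        return sens * pretest / (sens * pretest + (1 - spec) * (1 - pretest))
    return (1 - sens) * pretest / ((1 - sens) * pretest + spec * (1 - pretest))

# Hypothetical test: sensitivity 0.9, specificity 0.8, pre-test probability 0.3.
pre = 0.3
post = post_test_probability(pre, sens=0.9, spec=0.8, positive=True)
print(f"post-test probability: {post:.3f}")
print(f"diagnostic uncertainty before: {entropy_bits(pre):.3f} bits, "
      f"after: {entropy_bits(post):.3f} bits")
```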

  8. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis

    2017-08-01

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.

  9. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or the observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed about how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth of soil moisture. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.

  10. Parameter-induced uncertainty quantification of crop yields, soil N2O and CO2 emission for 8 arable sites across Europe using the LandscapeDNDC model

    NASA Astrophysics Data System (ADS)

    Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf

    2014-05-01

    When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties of the process parameters that describe the processes of the model, model structure inadequacy, as well as uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC for simulating crop yields, N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman-Rubin convergence statistic and parallel computing techniques, multiple Markov chains run independently in parallel, creating a random walk that estimates the joint model parameter distribution. Through this distribution we limit the parameter space, obtain probabilities of parameter values and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of simulation results and compare it with the measurement data.
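    Convergence of the parallel chains is typically judged with the Gelman-Rubin potential scale reduction factor, which compares between-chain and within-chain variance. A minimal implementation is sketched below on synthetic chains; the parameter values are placeholders, not LandscapeDNDC output.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for an array of chains with
    shape (n_chains, n_samples). Values close to 1 indicate convergence."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Hypothetical parallel chains for one model parameter, with small per-chain offsets.
rng = np.random.default_rng(3)
chains = rng.normal(0.4, 0.05, size=(4, 2000)) + rng.normal(0, 0.01, size=(4, 1))
print("R-hat =", round(float(gelman_rubin(chains)), 3))
```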

  11. Uncertainty Quantification using Epi-Splines and Soft Information

    DTIC Science & Technology

    2012-06-01

    ...use of the Kullback-Leibler divergence measure. The Kullback-Leibler ...to illustrate the application of soft information related to the Kullback-Leibler (KL) divergence discussed in Chapter 2. The idea behind applying... information for the estimation of system performance density functions in order to quantify uncertainty. We conduct empirical testing of...

  12. Quantification of uncertainty associated with United States high resolution fossil fuel CO2 emissions: updates, challenges and future plans

    NASA Astrophysics Data System (ADS)

    Gurney, K. R.; Chandrasekaran, V.; Mendoza, D. L.; Geethakumar, S.

    2010-12-01

    The Vulcan Project has estimated United States fossil fuel CO2 emissions at the hourly time scale and at spatial scales below the county level for the year 2002. Vulcan is built from a wide variety of observational data streams including regulated air pollutant emissions reporting, traffic monitoring, energy statistics, and US census data. In addition to these data sets, Vulcan relies on a series of modeling assumptions and constructs to interpolate in space and time and to transform non-CO2 reporting into an estimate of CO2 combustion emissions. The recent version 2.0 of the Vulcan inventory has produced advances in a number of categories, with particular emphasis on improved temporal structure. Onroad transportation emissions now make use of roughly 5000 automated traffic count monitors, allowing for much improved diurnal and weekly time structure. Though the inventory shows excellent agreement with independent national-level CO2 emissions estimates, uncertainty quantification has been a challenging task given the large number of data sources and numerous modeling assumptions. However, we have now accomplished a complete uncertainty estimate across all the Vulcan economic sectors and will present uncertainty estimates as a function of space, time, sector and fuel. We find that, like the underlying distribution of CO2 emissions themselves, the uncertainty is also strongly lognormal, with high uncertainty associated with a relatively small number of locations. These are typically locations reliant upon coal combustion as the dominant CO2 source. We will also compare and contrast Vulcan fossil fuel CO2 emissions estimates against estimates built from DOE fuel-based surveys at the state level. We conclude that much of the difference between the Vulcan inventory and DOE statistics is not due to biased estimation but to mechanistic differences in supply versus demand and combustion in space/time.

  13. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is however fraught with several difficulties. Among them, the key difficulties are the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. We apply these methods first to the development of a cooperative autonomous observing system using sUAS for studying coherent structures, and second to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  14. A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Wu, Keyi; Li, Jinglai

    2016-09-01

    In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.

  15. Quantification of water resources uncertainties in the Luvuvhu sub-basin of the Limpopo river basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, N.; Hughes, D.; Kapangaziwiri, E.; Mwenge Kahinda, J.; Mvandaba, V.

    2018-06-01

    In the absence of historical observed data, models are generally used to describe the different hydrological processes and generate data and information that will inform management and policy decision making. Ideally, any hydrological model should be based on a sound conceptual understanding of the processes in the basin and be backed by quantitative information for the parameterization of the model. However, these data are often inadequate in many sub-basins, necessitating the incorporation of the uncertainty related to the estimation process. This paper reports on the impact of the uncertainty related to the parameterization of the Pitman monthly model and water use data on the estimates of the water resources of the Luvuvhu, a sub-basin of the Limpopo river basin. The study reviews existing information sources associated with the quantification of water balance components and gives an update of the water resources of the sub-basin. The flows generated by the model at the outlet of the basin were between 44.03 Mm3 and 45.48 Mm3 per month when incorporating +20% uncertainty in the main physical runoff-generating parameters. The total predictive uncertainty of the model increased when water use data, such as small farm and large reservoirs and irrigation, were included. The dam capacity data were assigned an average uncertainty of 62%, mainly as a result of the large differences between the available information in the national water resources database and that digitised from satellite imagery. Water used by irrigated crops was estimated with an average of about 50% uncertainty. The mean simulated monthly flows were between 38.57 Mm3 and 54.83 Mm3 after the water use uncertainty was added. However, it is expected that the uncertainty could be reduced by using higher resolution remote sensing imagery.

  16. Analysis of actuator delay and its effect on uncertainty quantification for real-time hybrid simulation

    NASA Astrophysics Data System (ADS)

    Chen, Cheng; Xu, Weijie; Guo, Tong; Chen, Kai

    2017-10-01

    Uncertainties in structural properties can result in different responses in hybrid simulations. Quantification of the effect of these uncertainties would enable researchers to estimate the variances of structural responses observed from experiments. This poses challenges for real-time hybrid simulation (RTHS) due to the existence of actuator delay. Polynomial chaos expansion (PCE) projects the model outputs on a basis of orthogonal stochastic polynomials to account for influences of model uncertainties. In this paper, PCE is utilized to evaluate the effect of actuator delay on the maximum displacement from real-time hybrid simulation of a single degree of freedom (SDOF) structure when accounting for uncertainties in structural properties. The PCE is first applied to RTHS without delay to determine the order of the PCE, the number of sample points, and the method for calculating the coefficients. The PCE is then applied to RTHS with actuator delay. The mean, variance and Sobol indices are compared and discussed to evaluate the effects of actuator delay on uncertainty quantification for RTHS. Results show that the mean and the variance of the maximum displacement increase linearly and exponentially with respect to actuator delay, respectively. Sensitivity analysis through Sobol indices also indicates that the influence of individual random variables decreases while the coupling effect increases as the actuator delay increases.
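    The PCE machinery itself is compact: fit orthogonal-polynomial coefficients to model evaluations at sampled values of the standardized random inputs, then read the mean, variance and first-order Sobol indices directly off the coefficients. The sketch below uses probabilists' Hermite polynomials, a total-degree-2 expansion in two variables, and a hypothetical stand-in response in place of the RTHS maximum displacement.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)

def he(n, x):
    """Probabilists' Hermite polynomials He_0..He_2."""
    return {0: np.ones_like(x), 1: x, 2: x**2 - 1.0}[n]

# Multi-indices (degree in xi1, degree in xi2) for a total-degree-2 PCE.
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
norms = np.array([factorial(a) * factorial(b) for a, b in alphas], dtype=float)

def max_displacement(xi):
    # Hypothetical stand-in for the RTHS response: maximum displacement as a
    # function of standardized stiffness (xi[:, 0]) and damping (xi[:, 1]).
    return 1.0 + 0.3 * xi[:, 0] - 0.1 * xi[:, 1] + 0.05 * xi[:, 0] * xi[:, 1]

# Sample the inputs, evaluate the "model", fit PCE coefficients by least squares.
xi = rng.standard_normal((200, 2))
Psi = np.column_stack([he(a, xi[:, 0]) * he(b, xi[:, 1]) for a, b in alphas])
c = np.linalg.lstsq(Psi, max_displacement(xi), rcond=None)[0]

var_terms = c[1:] ** 2 * norms[1:]
total_var = var_terms.sum()
S1 = sum(v for (a, b), v in zip(alphas[1:], var_terms) if b == 0) / total_var
S2 = sum(v for (a, b), v in zip(alphas[1:], var_terms) if a == 0) / total_var
print(f"mean={c[0]:.3f}, variance={total_var:.4f}, S_stiffness={S1:.2f}, S_damping={S2:.2f}")
```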

  17. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  18. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  19. Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images

    NASA Astrophysics Data System (ADS)

    Ely, G.; Malcolm, A. E.; Poliannikov, O. V.

    2017-12-01

    Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces as well as the uncertainty of the velocity model directly impact the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allows us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimations. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us to create both qualitative descriptions of seismic image uncertainty and put error bounds on quantities of interest such as the dip angle of a subduction slab or thickness of a stratigraphic layer.

  20. Inventory Uncertainty Quantification using TENDL Covariance Data in Fispact-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eastwood, J.W.; Morgan, J.G.; Sublet, J.-Ch., E-mail: jean-christophe.sublet@ccfe.ac.uk

    2015-01-15

    The new inventory code Fispact-II provides predictions of inventory, radiological quantities and their uncertainties using nuclear data covariance information. Central to the method is a novel fast pathways search algorithm using directed graphs. The pathways output provides (1) an aid to identifying important reactions, (2) fast estimates of uncertainties, (3) reduced models that retain important nuclides and reactions for use in the code's Monte Carlo sensitivity analysis module. Described are the methods that are being implemented for improving uncertainty predictions, quantification and propagation using the covariance data that the recent nuclear data libraries contain. In the TENDL library, above the upper energy of the resolved resonance range, a Monte Carlo method in which the covariance data come from uncertainties of the nuclear model calculations is used. The nuclear data files are read directly by FISPACT-II without any further intermediate processing. Variance and covariance data are processed and used by FISPACT-II to compute uncertainties in collapsed cross sections, and these are in turn used to predict uncertainties in inventories and all derived radiological data.

  1. Using Uncertainty Quantification to Guide Development and Improvements of a Regional-Scale Model of the Coastal Lowlands Aquifer System Spanning Texas, Louisiana, Mississippi, Alabama and Florida

    NASA Astrophysics Data System (ADS)

    Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.

    2017-12-01

    Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.

  2. Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows

    NASA Astrophysics Data System (ADS)

    Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs

    2017-11-01

    A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.

  3. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches have focused mostly on quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov Chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  4. Kalman filter approach for uncertainty quantification in time-resolved laser-induced incandescence.

    PubMed

    Hadwin, Paul J; Sipkens, Timothy A; Thomson, Kevin A; Liu, Fengshan; Daun, Kyle J

    2018-03-01

    Time-resolved laser-induced incandescence (TiRe-LII) data can be used to infer spatially and temporally resolved volume fractions and primary particle size distributions of soot-laden aerosols, but these estimates are corrupted by measurement noise as well as uncertainties in the spectroscopic and heat transfer submodels used to interpret the data. Estimates of the temperature, concentration, and size distribution of soot primary particles within a sample aerosol are typically made by nonlinear regression of modeled spectral incandescence decay, or effective temperature decay, to experimental data. In this work, we employ nonstationary Bayesian estimation techniques to infer aerosol properties from simulated and experimental LII signals, specifically the extended Kalman filter and Schmidt-Kalman filter. These techniques exploit the time-varying nature of both the measurements and the models, and they reveal how uncertainty in the estimates computed from TiRe-LII data evolves over time. Both techniques perform better when compared with standard deterministic estimates; however, we demonstrate that the Schmidt-Kalman filter produces more realistic uncertainty estimates.
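    The filtering step can be illustrated with a scalar extended Kalman filter: linearize the state model around the current estimate, predict, and update with the new measurement. The cooling model, noise levels, and numbers below are hypothetical placeholders, not the TiRe-LII spectroscopic and heat transfer submodels of the paper.

```python
import numpy as np

# Minimal extended Kalman filter for a scalar state x (e.g. an effective
# particle temperature) with a hypothetical exponential cooling model
# x_{k+1} = x_k - dt * c * (x_k - T_gas) and a direct noisy measurement.
dt, c, T_gas = 1e-8, 5e6, 300.0
Q, R = 4.0, 100.0                      # process / measurement noise variances

def f(x):      return x - dt * c * (x - T_gas)
def F_jac(x):  return 1.0 - dt * c          # df/dx for the linearization
def h(x):      return x                     # measure the temperature directly

rng = np.random.default_rng(5)
x_true, x_est, P = 3200.0, 3000.0, 400.0
for k in range(50):
    x_true = f(x_true) + rng.normal(0, np.sqrt(Q))
    z = h(x_true) + rng.normal(0, np.sqrt(R))
    # Predict
    x_pred = f(x_est)
    P_pred = F_jac(x_est) * P * F_jac(x_est) + Q
    # Update
    K = P_pred / (P_pred + R)
    x_est = x_pred + K * (z - h(x_pred))
    P = (1.0 - K) * P_pred
print(f"final estimate {x_est:.1f} K, 1-sigma {np.sqrt(P):.1f} K (truth {x_true:.1f} K)")
```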

  5. Determination of boron in uranium aluminum silicon alloy by spectrophotometry and estimation of expanded uncertainty in measurement

    NASA Astrophysics Data System (ADS)

    Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.

    2008-08-01

    Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethyl-1,3-hexanediol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.

  6. Time-Resolved Particle Image Velocimetry Measurements with Wall Shear Stress and Uncertainty Quantification for the FDA Nozzle Model.

    PubMed

    Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P

    2016-03-01

    We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.

  7. Whole farm quantification of GHG emissions within smallholder farms in developing countries

    NASA Astrophysics Data System (ADS)

    Seebauer, Matthias

    2014-03-01

    The IPCC has compiled the best available scientific methods into published guidelines for estimating greenhouse gas emissions and emission removals from the land-use sector. In order to evaluate how well existing GHG quantification tools comprehensively quantify GHG emissions and removals in smallholder conditions, farm scale quantification was tested with farm data from Western Kenya. After conducting a cluster analysis to identify different farm typologies, GHG quantification was carried out using the VCS SALM methodology complemented with IPCC livestock emission factors and the Cool Farm Tool. The emission profiles of four farm clusters representing the baseline conditions in the year 2009 are compared with 2011, when farmers adopted sustainable land management practices (SALM). The results demonstrate the variation in both the magnitude of the estimated GHG emissions per ha between different smallholder farm typologies and the emissions estimated by applying two different accounting tools. The farm scale quantification further shows that the adoption of SALM has a significant impact on emission reduction and removals, and the mitigation benefits range between 4 and 6.5 tCO2 ha-1 yr-1, with significantly different mitigation benefits depending on the typologies of the crop-livestock systems, their different agricultural practices, as well as adoption rates of improved practices. However, the inherent uncertainty related to the emission factors applied by accounting tools has substantial implications for reported agricultural emissions. With regard to uncertainty related to activity data, the assessment confirms the high variability within different farm types as well as between different parameters surveyed to comprehensively quantify GHG emissions within smallholder farms.

  8. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  9. A geostatistical approach for quantification of contaminant mass discharge uncertainty using multilevel sampler measurements

    NASA Astrophysics Data System (ADS)

    Li, K. Betty; Goovaerts, Pierre; Abriola, Linda M.

    2007-06-01

    Contaminant mass discharge across a control plane downstream of a dense nonaqueous phase liquid (DNAPL) source zone has great potential to serve as a metric for the assessment of the effectiveness of source zone treatment technologies and for the development of risk-based source-plume remediation strategies. However, too often the uncertainty of mass discharge estimated in the field is not accounted for in the analysis. In this paper, a geostatistical approach is proposed to estimate mass discharge and to quantify its associated uncertainty using multilevel transect measurements of contaminant concentration (C) and hydraulic conductivity (K). The approach adapts the p-field simulation algorithm to propagate and upscale the uncertainty of mass discharge from the local uncertainty models of C and K. Application of this methodology to numerically simulated transects shows that, with a regular sampling pattern, geostatistics can provide an accurate model of uncertainty for the transects that are associated with low levels of source mass removal (i.e., transects that have a large percentage of contaminated area). For high levels of mass removal (i.e., transects with a few hot spots and large areas of near-zero concentration), a total sampling area equivalent to 6-7% of the transect is required to achieve accurate uncertainty modeling. A comparison of the results for different measurement supports indicates that samples taken with longer screen lengths may lead to less accurate models of mass discharge uncertainty. The quantification of mass discharge uncertainty, in the form of a probability distribution, will facilitate risk assessment associated with various remediation strategies.
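    The propagation step can be sketched as follows: draw concentration and conductivity values at each control-plane cell from their local uncertainty distributions, sum the cell-by-cell fluxes, and repeat to build a distribution of mass discharge. The plain Monte Carlo sketch below ignores the spatial correlation that the p-field algorithm honors and uses made-up local distributions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Control-plane grid: local means of concentration C [mg/L] and hydraulic
# conductivity K [m/d] per cell, plus cell area and hydraulic gradient.
n_cells, dA, gradient = 50, 0.25, 0.005
C_mean = rng.lognormal(mean=1.0, sigma=1.0, size=n_cells)
K_mean = rng.lognormal(mean=0.0, sigma=0.5, size=n_cells)

def sample_mass_discharge(n_draws=10000):
    """Propagate local C and K uncertainty to mass discharge across the plane.
    A plain Monte Carlo stand-in for the p-field simulation of the paper,
    which would additionally honor spatial correlation of the fields."""
    C = C_mean * rng.lognormal(0.0, 0.3, size=(n_draws, n_cells))
    K = K_mean * rng.lognormal(0.0, 0.3, size=(n_draws, n_cells))
    # M_d = sum_i C_i * K_i * gradient * dA  (Darcy flux times concentration),
    # reported here up to a unit-conversion constant.
    return (C * K * gradient * dA).sum(axis=1)

md = sample_mass_discharge()
print("mass discharge: median %.2f, 90%% interval [%.2f, %.2f]" %
      (np.median(md), *np.percentile(md, [5, 95])))
```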

  10. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions as opposed to its parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
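    The first-order correction at the heart of the method is a functional Taylor expansion: the change in a quantity of interest is the discretized sensitivity profile integrated against the change in the pair potential. The sketch below illustrates this with a placeholder sensitivity profile and a Lennard-Jones-to-Morse potential swap; all numbers are hypothetical.

```python
import numpy as np

# Discretized pair distances r_i at which the functional derivative of the
# quantity of interest Q (e.g. average potential energy) with respect to the
# pair potential phi(r) has been sampled. Values are hypothetical placeholders.
r = np.linspace(0.9, 2.5, 200)                 # reduced LJ units
dr = r[1] - r[0]
dQ_dphi = np.exp(-(r - 1.1) ** 2 / 0.05)       # stand-in sensitivity profile

def lj(r, eps=1.0, sigma=1.0):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

def morse(r, D=1.0, a=6.0, r0=1.12):
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2 - D

# First-order functional Taylor correction:
#   Q[phi_new] ~ Q[phi_ref] + integral dQ/dphi(r) * (phi_new - phi_ref)(r) dr
Q_ref = -6.2                                    # hypothetical MD result for the LJ reference
dQ = np.sum(dQ_dphi * (morse(r) - lj(r))) * dr
print(f"predicted Q for the Morse potential: {Q_ref + dQ:.3f} (reference {Q_ref})")
```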

  11. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of uncertainties in model predictions. Various uncertainties associated with modeling a natural system, such as: (1) measurement and computational errors; (2) uncertainties in the conceptual model and model-parameter estimates; (3) simplifications in model setup and numerical representation of governing processes, contribute to the uncertainties in the model predictions. Due to this combination of factors, the sources of predictive uncertainties are generally difficult to quantify individually. Decision support related to optimal design of monitoring networks requires (1) detailed analyses of existing uncertainties related to model predictions of groundwater flow and contaminant transport, (2) optimization of the proposed monitoring network locations in terms of their efficiency to detect contaminants and provide early warning. We apply existing and newly-proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of newly-developed optimization technique based on coupling of Particle Swarm and Levenberg-Marquardt optimization methods which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection. The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.

  12. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of Seebeck coefficient and electrical resistivity are critical to the investigation of all thermoelectric systems. Therefore, it stands to reason that the measurement uncertainty must be well understood to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates the error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty in the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor has been found to have an uncertainty of +9-14 at high temperature and 9 near room temperature.

  13. Estimating regional greenhouse gas fluxes: An uncertainty analysis of planetary boundary layer techniques and bottom-up inventories

    USDA-ARS?s Scientific Manuscript database

    Quantification of regional greenhouse gas (GHG) fluxes is essential for establishing mitigation strategies and evaluating their effectiveness. Here, we used multiple top-down approaches and multiple trace gas observations at a tall tower to estimate GHG regional fluxes and evaluate the GHG fluxes de...

  14. Comparison of deterministic and stochastic approaches for isotopic concentration and decay heat uncertainty quantification on elementary fission pulse

    NASA Astrophysics Data System (ADS)

    Lahaye, S.; Huynh, T. D.; Tsilanizara, A.

    2016-03-01

    Uncertainty quantification of interest outputs in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long-term deposits. Most of those outputs are functions of the isotopic vector density, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate, by two different methods, the uncertainty from nuclear data inputs to isotopic concentrations and decay heat. This paper shows comparisons between those two codes on a uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is examined in this paper. All results show good agreement between both codes and methods, ensuring the reliability of both approaches for a given evaluation.

  15. Rational selection of experimental readout and intervention sites for reducing uncertainties in computational model predictions.

    PubMed

    Flassig, Robert J; Migal, Iryna; der Zalm, Esther van; Rihko-Struckmann, Liisa; Sundmacher, Kai

    2015-01-16

    Understanding the dynamics of biological processes can substantially be supported by computational models in the form of nonlinear ordinary differential equations (ODE). Typically, this model class contains many unknown parameters, which are estimated from inadequate and noisy data. Depending on the ODE structure, predictions based on unmeasured states and associated parameters are highly uncertain, even undetermined. For given data, profile likelihood analysis has been proven to be one of the most practically relevant approaches for analyzing the identifiability of an ODE structure, and thus model predictions. In the case of highly uncertain or non-identifiable parameters, rational experimental design based on various approaches has been shown to significantly reduce parameter uncertainties with a minimal amount of effort. In this work we illustrate how to use profile likelihood samples for quantifying the individual contribution of parameter uncertainty to prediction uncertainty. For the uncertainty quantification we introduce the profile likelihood sensitivity (PLS) index. Additionally, for the case of several uncertain parameters, we introduce the PLS entropy to quantify individual contributions to the overall prediction uncertainty. We show how to use these two criteria as an experimental design objective for selecting new, informative readouts in combination with intervention site identification. The characteristics of the proposed multi-criterion objective are illustrated with an in silico example. We further illustrate how an existing practically non-identifiable model for the chlorophyll fluorescence induction in a photosynthetic organism, D. salina, can be rendered identifiable by additional experiments with new readouts. Having data and profile likelihood samples at hand, the uncertainty quantification proposed here, based on prediction samples from the profile likelihood, provides a simple way of determining individual contributions of parameter uncertainties to uncertainties in model predictions. The uncertainty quantification of specific model predictions allows identifying regions where model predictions have to be considered with care. Such uncertain regions can be used for a rational experimental design to render initially highly uncertain model predictions into certainty. Finally, our uncertainty quantification directly accounts for parameter interdependencies and parameter sensitivities of the specific prediction.

  16. Quantification of uncertainties for application in detonation simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Miao; Ma, Zhibo

    2016-06-01

    Numerical simulation has become an important means of designing detonation systems, and the quantification of its uncertainty is also necessary for reliability certification. In quantifying the uncertainty, the most important task is to analyze how the uncertainties arise and propagate, and how the simulations evolve from benchmark models to new models. Based on the practical needs of engineering and the technology of verification & validation, a framework for QU (quantification of uncertainty) is put forward for the case in which simulation is used on a detonation system for scientific prediction. An example is offered to describe the general idea of quantification of simulation uncertainties.

  17. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghanem, Roger

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  18. Survey of Existing Uncertainty Quantification Capabilities for Army Relevant Problems

    DTIC Science & Technology

    2017-11-27

    ARL-TR-8218, November 2017, US Army Research Laboratory. Survey of Existing Uncertainty Quantification Capabilities for Army-Relevant Problems, by James J Ramsey.

  19. Development of a Framework for Model-Based Analysis, Uncertainty Quantification, and Robust Control Design of Nonlinear Smart Composite Systems

    DTIC Science & Technology

    2015-06-04

    ...control, vibration and noise control, health monitoring, and energy harvesting. However, these advantages come at the cost of rate-dependent hysteresis... configuration used for energy harvesting. Uncertainty quantification is pursued in two steps: (i) determination of densities... Crews and R.C. Smith, "Quantification of parameter and model uncertainty for shape memory alloy bending actuators," Journal of Intelligent Material...

  20. An efficient assisted history matching and uncertainty quantification workflow using Gaussian processes proxy models and variogram based sensitivity analysis: GP-VARS

    NASA Astrophysics Data System (ADS)

    Rana, Sachin; Ertekin, Turgay; King, Gregory R.

    2018-05-01

    Reservoir history matching is frequently viewed as an optimization problem which involves minimizing the misfit between simulated and observed data. Many gradient-based and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, which typically require a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) based proxy models combined with a novel application of variogram analysis of response surface (VARS) based sensitivity analysis to efficiently solve high dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. The history matching solutions are found via Bayesian optimization (BO) on forward GP models and via predictions of the inverse GP model in an iterative manner. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain a probabilistic estimate of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir is presented, in which it is shown that GP-VARS provides history match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true values for the PUNQ-S3 reservoir.
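    The forward-proxy half of such a workflow can be sketched with a small hand-rolled Gaussian process and a lower-confidence-bound acquisition: fit the GP to the misfits of the simulations run so far, then pick the next parameter set where the proxy predicts a low misfit with high uncertainty. The misfit function and dimensions below are hypothetical stand-ins for the reservoir simulator, and the VARS-based sensitivity screening and inverse GP are omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(A, B, ls=0.15, var=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-4):
    """Plain GP regression (zero mean, RBF kernel): posterior mean and sd."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks, Kss = rbf(Xtr, Xte), rbf(Xte, Xte)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    sd = np.sqrt(np.clip(np.diag(Kss) - (v * v).sum(0), 0, None))
    return mu, sd

def misfit(x):
    # Hypothetical stand-in for the simulator misfit between simulated and
    # observed production data, as a function of normalized reservoir properties.
    return ((x - 0.3) ** 2).sum(axis=-1) + 0.01 * np.sin(20 * x[..., 0])

# Initial simulations, then iterate: fit the proxy, pick the next point by a
# lower confidence bound (a simple Bayesian-optimization acquisition).
X = rng.uniform(0, 1, size=(8, 2)); y = misfit(X)
cand = rng.uniform(0, 1, size=(2000, 2))
for it in range(15):
    mu, sd = gp_predict(X, y, cand)
    x_next = cand[np.argmin(mu - 1.5 * sd)]
    X = np.vstack([X, x_next]); y = np.append(y, misfit(x_next))
print("best history-match candidate:", X[np.argmin(y)], "misfit:", y.min())
```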

  1. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at scan edges, for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of the error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.

  2. Evaluation of the ²³⁹Pu prompt fission neutron spectrum induced by neutrons of 500 keV and associated covariances

    DOE PAGES

    Neudecker, D.; Talou, P.; Kawano, T.; ...

    2015-08-01

    We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k eff of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The k eff one standard deviations overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1.

  3. Evaluation of the ²³⁹Pu prompt fission neutron spectrum induced by neutrons of 500 keV and associated covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neudecker, D.; Talou, P.; Kawano, T.

    2015-08-01

    We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. (2010), surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k(eff) of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The k(eff) one standard deviations overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1. (C) 2015 Elsevier B.V. All rights reserved.

  4. Satellite Re-entry Modeling and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Horsley, M.

    2012-09-01

    LEO trajectory modeling is a fundamental aerospace capability and has applications in many areas of aerospace, such as maneuver planning, sensor scheduling, re-entry prediction, collision avoidance, risk analysis, and formation flying. Somewhat surprisingly, modeling the trajectory of an object in low Earth orbit is still a challenging task. This is primarily due to the large uncertainty in the upper atmospheric density, about 15-20% (1-sigma) for most thermosphere models. Other contributions come from our inability to precisely model future solar and geomagnetic activities, the potentially unknown shape, material construction and attitude history of the satellite, and intermittent, noisy tracking data. Current methods to predict a satellite's re-entry trajectory typically involve making a single prediction, with the uncertainty dealt with in an ad-hoc manner, usually based on past experience. However, due to the extreme speed of a LEO satellite, even small uncertainties in the re-entry time translate into a very large uncertainty in the location of the re-entry event. Currently, most methods simply update the re-entry estimate on a regular basis. This results in a wide range of estimates that are literally spread over the entire globe. With no understanding of the underlying distribution of potential impact points, the sequence of impact points predicted by the current methodology are largely useless until just a few hours before re-entry. This paper will discuss the development of a set of the High Performance Computing (HPC)-based capabilities to support near real-time quantification of the uncertainty inherent in uncontrolled satellite re-entries. An appropriate management of the uncertainties is essential for a rigorous treatment of the re-entry/LEO trajectory problem. The development of HPC-based tools for re-entry analysis is important as it will allow a rigorous and robust approach to risk assessment by decision makers in an operational setting. Uncertainty quantification results from the recent uncontrolled re-entry of the Phobos-Grunt satellite will be presented and discussed. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  5. The Harm that Underestimation of Uncertainty Does to Our Community: A Case Study Using Sunspot Area Measurements

    NASA Astrophysics Data System (ADS)

    Munoz-Jaramillo, Andres

    2017-08-01

    Data products in heliospheric physics are very often provided without clear estimates of uncertainty. From helioseismology in the solar interior, all the way to in situ solar wind measurements beyond 1 AU, uncertainty estimates are typically hard for users to find (buried inside long documents that are separate from the data products), or simply non-existent. There are two main reasons why uncertainty measurements are hard to find: (1) understanding instrumental systematic errors is given a much higher priority inside instrument teams; and (2) the desire to perfectly understand all sources of uncertainty postpones indefinitely the actual quantification of uncertainty in our measurements. Using the cross-calibration of 200 years of sunspot area measurements as a case study, in this presentation we will discuss the negative impact that inadequate measurements of uncertainty have on users, through the appearance of toxic and unnecessary controversies, and on data providers, through the creation of unrealistic expectations regarding the information that can be extracted from their data. We will discuss how empirical estimates of uncertainty represent a very good alternative to not providing any estimates at all, and finalize by discussing the bare essentials that should become our standard practice for future instruments and surveys.

  6. Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods

    NASA Astrophysics Data System (ADS)

    Thurin, J.; Brossier, R.; Métivier, L.

    2017-12-01

    Uncertainty estimation is one key feature of tomographic applications for robust interpretation. However, this information is often missing in the frame of large scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have already been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification methods, but often try to assess the result's quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks/surveys, the increase in computational power and the more and more systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near surface targets, crustal exploration, as well as regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach which takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001), in order to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to evaluate some uncertainty information of the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we chose to combine a conventional FWI, based on local optimization, with the ETKF strategies. This scheme combines the efficiency of local optimization for solving large scale inverse problems and makes the sampling of the local solution space possible thanks to its embarrassingly parallel property. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
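
    For readers unfamiliar with the ensemble transform step, the low-rank posterior-covariance idea can be sketched as follows. This is a generic linear-observation ETKF update in Python, assuming an illustrative state size, ensemble size, observation operator and noise level; it is not the FWI configuration of Thurin et al.

      import numpy as np

      rng = np.random.default_rng(1)
      n, Ne, m = 50, 20, 30            # state size, ensemble size, observations (illustrative)

      X = rng.normal(size=(n, Ne))     # prior ensemble (columns are members)
      H = rng.normal(size=(m, n))      # generic linear observation operator (assumption)
      R_inv = np.eye(m) / 0.1**2       # observation-noise precision, assumed iid with sigma = 0.1
      d = rng.normal(size=m)           # synthetic observed data

      xm = X.mean(axis=1, keepdims=True)
      A = (X - xm) / np.sqrt(Ne - 1)   # normalized ensemble perturbations, so prior cov = A A^T
      HA = H @ A

      # Ensemble-space matrices and the transform T such that analysis perturbations are A @ T
      C = HA.T @ R_inv @ HA
      w, V = np.linalg.eigh(C)
      T = V @ np.diag(1.0 / np.sqrt(1.0 + w)) @ V.T

      # Analysis mean via the Kalman update computed in ensemble space
      K = A @ np.linalg.solve(np.eye(Ne) + C, HA.T @ R_inv)
      xa = xm[:, 0] + K @ (d - (H @ xm)[:, 0])

      # Low-rank approximation of the posterior covariance (rank at most Ne - 1)
      Aa = A @ T
      P_post = Aa @ Aa.T
      print("posterior variances of the first three parameters:", np.diag(P_post)[:3])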

  7. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

    Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.

  8. Uncertainty Quantification for Ice Sheet Science and Sea Level Projections

    NASA Astrophysics Data System (ADS)

    Boening, C.; Schlegel, N.; Limonadi, D.; Schodlok, M.; Seroussi, H. L.; Larour, E. Y.; Watkins, M. M.

    2017-12-01

    In order to better quantify uncertainties in global mean sea level rise projections and in particular upper bounds, we aim at systematically evaluating the contributions from ice sheets and potential for extreme sea level rise due to sudden ice mass loss. Here, we take advantage of established uncertainty quantification tools embedded within the Ice Sheet System Model (ISSM) as well as sensitivities to ice/ocean interactions using melt rates and melt potential derived from MITgcm/ECCO2. With the use of these tools, we conduct Monte-Carlo style sampling experiments on forward simulations of the Antarctic ice sheet, by varying internal parameters and boundary conditions of the system over both extreme and credible worst-case ranges. Uncertainty bounds for climate forcing are informed by CMIP5 ensemble precipitation and ice melt estimates for year 2100, and uncertainty bounds for ocean melt rates are derived from a suite of regional sensitivity experiments using MITgcm. Resulting statistics allow us to assess how regional uncertainty in various parameters affect model estimates of century-scale sea level rise projections. The results inform efforts to a) isolate the processes and inputs that are most responsible for determining ice sheet contribution to sea level; b) redefine uncertainty brackets for century-scale projections; and c) provide a prioritized list of measurements, along with quantitative information on spatial and temporal resolution, required for reducing uncertainty in future sea level rise projections. Results indicate that ice sheet mass loss is dependent on the spatial resolution of key boundary conditions - such as bedrock topography and melt rates at the ice-ocean interface. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  9. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  10. The Zugspitze radiative closure experiment for quantifying water vapor absorption over the terrestrial and solar infrared - Part 2: Accurate calibration of high spectral-resolution infrared measurements of surface solar radiation

    NASA Astrophysics Data System (ADS)

    Reichert, Andreas; Rettinger, Markus; Sussmann, Ralf

    2016-09-01

    Quantitative knowledge of water vapor absorption is crucial for accurate climate simulations. An open science question in this context concerns the strength of the water vapor continuum in the near infrared (NIR) at atmospheric temperatures, which is still to be quantified by measurements. This issue can be addressed with radiative closure experiments using solar absorption spectra. However, the spectra used for water vapor continuum quantification have to be radiometrically calibrated. We present for the first time a method that yields sufficient calibration accuracy for NIR water vapor continuum quantification in an atmospheric closure experiment. Our method combines the Langley method with spectral radiance measurements of a high-temperature blackbody calibration source (< 2000 K). The calibration scheme is demonstrated in the spectral range 2500 to 7800 cm-1, but minor modifications to the method enable calibration also throughout the remainder of the NIR spectral range. The resulting uncertainty (2σ) excluding the contribution due to inaccuracies in the extra-atmospheric solar spectrum (ESS) is below 1 % in window regions and up to 1.7 % within absorption bands. The overall radiometric accuracy of the calibration depends on the ESS uncertainty, on which at present no firm consensus has been reached in the NIR. However, as is shown in the companion publication Reichert and Sussmann (2016), ESS uncertainty is only of minor importance for the specific aim of this study, i.e., the quantification of the water vapor continuum in a closure experiment. The calibration uncertainty estimate is substantiated by the investigation of calibration self-consistency, which yields compatible results within the estimated errors for 91.1 % of the 2500 to 7800 cm-1 range. Additionally, a comparison of a set of calibrated spectra to radiative transfer model calculations yields consistent results within the estimated errors for 97.7 % of the spectral range.
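
    The Langley part of the calibration rests on the Beer-Lambert relation ln V(m) = ln V0 - m*tau, where m is the air mass and V0 the extra-atmospheric signal, so a regression of log signal against air mass extrapolated to m = 0 recovers V0. A minimal numpy sketch under these standard assumptions is given below, with synthetic data for a single spectral channel; it is not the authors' full calibration chain.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic Langley plot for one spectral channel (illustrative values)
      tau_true, V0_true = 0.12, 1.0          # optical depth and extra-atmospheric signal
      airmass = np.linspace(1.5, 6.0, 25)    # solar air masses sampled during a half day
      V = V0_true * np.exp(-tau_true * airmass) * (1 + 0.005 * rng.normal(size=airmass.size))

      # Linear fit of ln V against air mass; the intercept at m = 0 gives ln V0
      (slope, intercept), cov = np.polyfit(airmass, np.log(V), 1, cov=True)
      V0_est, tau_est = np.exp(intercept), -slope

      # Approximate 1-sigma uncertainty of V0 from the intercept variance (delta method)
      V0_sigma = V0_est * np.sqrt(cov[1, 1])
      print(f"V0 = {V0_est:.4f} +/- {V0_sigma:.4f}, tau = {tau_est:.4f}")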

  11. Uncertainty quantification of surface-water/groundwater exchange estimates in large wetland systems using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; Metz, P. A.

    2014-12-01

    Most watershed studies include observation-based water budget analyses to develop first-order estimates of significant flow terms. Surface-water/groundwater (SWGW) exchange is typically assumed to be equal to the residual of the sum of inflows and outflows in a watershed. These estimates of SWGW exchange, however, are highly uncertain as a result of the propagation of uncertainty inherent in the calculation or processing of the other terms of the water budget, such as stage-area-volume relations, and uncertainties associated with land-cover based evapotranspiration (ET) rate estimates. Furthermore, the uncertainty of estimated SWGW exchanges can be magnified in large wetland systems that transition from dry to wet during wet periods. Although it is well understood that observation-based estimates of SWGW exchange are uncertain it is uncommon for the uncertainty of these estimates to be directly quantified. High-level programming languages like Python can greatly reduce the effort required to (1) quantify the uncertainty of estimated SWGW exchange in large wetland systems and (2) evaluate how different approaches for partitioning land-cover data in a watershed may affect the water-budget uncertainty. We have used Python with the Numpy, Scipy.stats, and pyDOE packages to implement an unconstrained Monte Carlo approach with Latin Hypercube sampling to quantify the uncertainty of monthly estimates of SWGW exchange in the Floral City watershed of the Tsala Apopka wetland system in west-central Florida, USA. Possible sources of uncertainty in the water budget analysis include rainfall, ET, canal discharge, and land/bathymetric surface elevations. Each of these input variables was assigned a probability distribution based on observation error or spanning the range of probable values. The Monte Carlo integration process exposes the uncertainties in land-cover based ET rate estimates as the dominant contributor to the uncertainty in SWGW exchange estimates. We will discuss the uncertainty of SWGW exchange estimates using an ET model that partitions the watershed into open water and wetland land-cover types. We will also discuss the uncertainty of SWGW exchange estimates calculated using ET models partitioned into additional land-cover types.
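
    As a condensed illustration of the workflow above, the sketch below uses pyDOE's Latin Hypercube sampling to push assumed input distributions through a simple monthly water-budget residual. The distributions, units and the residual form (SWGW = rainfall - ET - canal discharge - storage change) are illustrative placeholders, not the Floral City model.

      import numpy as np
      from scipy import stats
      from pyDOE import lhs

      n = 10_000
      u = lhs(4, samples=n)                           # Latin Hypercube samples on the unit hypercube

      # Map LHS columns to assumed input distributions (illustrative monthly depths, mm)
      rain  = stats.norm(100.0, 10.0).ppf(u[:, 0])    # rainfall with gauge/interpolation error
      et    = stats.uniform(80.0, 40.0).ppf(u[:, 1])  # land-cover based ET, deliberately wide
      canal = stats.norm(30.0, 5.0).ppf(u[:, 2])      # canal discharge out of the watershed
      dS    = stats.norm(10.0, 8.0).ppf(u[:, 3])      # storage change from stage-area-volume relations

      # Surface-water/groundwater exchange as the water-budget residual
      swgw = rain - et - canal - dS

      lo, hi = np.percentile(swgw, [2.5, 97.5])
      print(f"SWGW exchange: median {np.median(swgw):.1f} mm, 95% interval [{lo:.1f}, {hi:.1f}] mm")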

  12. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
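
    The quantity the correction factor is built from can be illustrated with a few lines of numpy: the cosine of the angle between an assumed true motion direction and the local image gradients, averaged over a correlation subset. The image, subset and motion direction below are synthetic, and the exact integral weighting of the published correction factor is not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic speckle-like image and an assumed true motion direction (unit vector)
      img = rng.normal(size=(64, 64))
      motion = np.array([1.0, 0.0])              # assumed horizontal motion

      # Image gradients over one correlation subset
      gy, gx = np.gradient(img[16:48, 16:48])    # np.gradient returns row (y) then column (x) derivatives
      gmag = np.sqrt(gx**2 + gy**2) + 1e-12

      # Cosine of the angle between the motion direction and the gradient at each pixel
      cos_theta = (gx * motion[0] + gy * motion[1]) / gmag

      # A simple subset-averaged sensitivity measure; values near zero flag motion components
      # the image texture cannot constrain, which inflates the standard uncertainty estimate
      sensitivity = np.sqrt(np.mean(cos_theta**2))
      print(f"subset-averaged rms of cos(theta): {sensitivity:.3f}")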

  13. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.

  14. [Application of robustness test for assessment of the measurement uncertainty at the end of development phase of a chromatographic method for quantification of water-soluble vitamins].

    PubMed

    Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T

    2016-05-01

    We propose in this work an efficient way to evaluate the measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The estimation of the uncertainty is done through a robustness test by applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into standard uncertainty values at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the procedure development was well done. In addition, a statistical comparison of the standard uncertainties after the development stage with those of the validation step indicates that the estimated uncertainties are equivalent. The results obtained show clearly the performance and capacity of the chromatographic method to simultaneously assay the five vitamins and its suitability for routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
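
    A brief sketch of how such a robustness screen can be set up and its effects folded into a standard uncertainty is given below, assuming pyDOE for the design matrix. The six factors, the simulated responses and the rectangular-distribution conversion rule are generic placeholders, not the published chromatographic procedure.

      import numpy as np
      from pyDOE import pbdesign

      # Two-level Plackett-Burman design for six method parameters
      # (e.g. pH, flow rate, column temperature, ...; illustrative only)
      design = pbdesign(6)            # rows are runs, entries are -1/+1 factor levels
      n_runs = design.shape[0]

      rng = np.random.default_rng(4)
      # Placeholder responses: recovered concentration (% of nominal) for each run
      response = 100.0 + design @ np.array([0.6, -0.3, 0.1, 0.4, -0.2, 0.05]) \
                 + rng.normal(0.0, 0.2, size=n_runs)

      # Estimated effect of each factor = mean(+1 runs) - mean(-1 runs)
      effects = 2.0 * (design.T @ response) / n_runs

      # Treat each |effect| as the full width of a rectangular distribution (one common
      # convention) to obtain a standard-uncertainty contribution, then combine in quadrature
      u_factors = np.abs(effects) / (2.0 * np.sqrt(3.0))
      u_robustness = np.sqrt(np.sum(u_factors**2))
      print("effects:", np.round(effects, 3))
      print(f"robustness-based standard uncertainty: {u_robustness:.3f} %")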

  15. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE PAGES

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    2016-01-01

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.
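
    The telescoping idea behind an MLMC estimator can be sketched generically: the expectation at the finest level is written as the coarse-level expectation plus corrections between consecutive levels, each estimated with its own sample size. In the toy Python sketch below, the level-dependent "solver" and the sample allocation are illustrative assumptions, not the nonlocal-equation hierarchy of the paper.

      import numpy as np

      rng = np.random.default_rng(5)

      def qoi(theta, level):
          # Toy approximation hierarchy: finer levels carry a smaller discretization error.
          # Stands in for a quantity of interest from the forward model at resolution `level`.
          return np.sin(theta) + 2.0 ** (-(level + 1)) * np.cos(3.0 * theta)

      L = 4                                   # number of levels
      N = [4000, 1000, 250, 60]               # samples per level (more on cheap, coarse levels)

      mlmc_estimate = 0.0
      for level in range(L):
          theta = rng.normal(size=N[level])   # draws from the prior on the uncertain parameter
          if level == 0:
              corr = qoi(theta, 0)
          else:
              # The telescoping correction uses the SAME random inputs on both levels
              corr = qoi(theta, level) - qoi(theta, level - 1)
          mlmc_estimate += corr.mean()

      print(f"MLMC estimate of E[QoI]: {mlmc_estimate:.4f}")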

  16. Forward and inverse uncertainty quantification using multilevel Monte Carlo algorithms for an elliptic non-local equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasra, Ajay; Law, Kody J. H.; Zhou, Yan

    Our paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined, as is the posterior distribution on parameters given observations. As the elliptic nonlocal equation cannot be solved exactly, approximate posteriors are constructed. The multilevel Monte Carlo (MLMC) and multilevel sequential Monte Carlo (MLSMC) sampling algorithms are used for a priori and a posteriori estimation, respectively, of quantities of interest. Furthermore, these algorithms reduce the amount of work to estimate posterior expectations, for a given level of error, relative to Monte Carlo and i.i.d. sampling from the posterior at a given level of approximation of the solution of the elliptic nonlocal equation.

  17. Quantification of variability and uncertainty for air toxic emission inventories with censored emission factor data.

    PubMed

    Frey, H Christopher; Zhao, Yuchao

    2004-11-15

    Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparison of cumulative distributions of bootstrap confidence intervals and empirical data. The emission inventory 95% uncertainty ranges vary from as small as -25% to +42% for chromium to as large as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
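
    A compact sketch of the censored-data fitting and bootstrap steps is given below: a lognormal inter-unit variability distribution fit by maximum likelihood, with nondetects entering the likelihood through the CDF at their detection limits, followed by a parametric bootstrap of the mean emission factor. The data values, detection limits and distribution choice are synthetic assumptions, not the Houston inventory data.

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(6)

      # Synthetic emission-factor data: detected values plus left-censored nondetects
      detected = np.array([0.8, 1.2, 0.5, 2.3, 1.7, 0.9, 3.1])
      det_limits = np.array([0.4, 0.4, 0.6])        # detection limits of the nondetected samples

      def neg_loglik(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)
          ll = stats.lognorm.logpdf(detected, s=sigma, scale=np.exp(mu)).sum()
          ll += stats.lognorm.logcdf(det_limits, s=sigma, scale=np.exp(mu)).sum()
          return -ll

      res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
      mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

      # Parametric bootstrap of the mean emission factor
      n = detected.size + det_limits.size
      boot_means = [stats.lognorm.rvs(s=sigma_hat, scale=np.exp(mu_hat), size=n,
                                      random_state=rng).mean() for _ in range(2000)]
      lo, hi = np.percentile(boot_means, [2.5, 97.5])
      print(f"mean emission factor, 95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")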

  18. The development of an hourly gridded rainfall product for hydrological applications in England and Wales

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross

    2014-05-01

    This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically, we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance by the utilisation of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environmental Agency.

  19. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.

  20. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single phase, intrusive, polynomial chaos scheme into multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to help advance knowledge in atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive collocation methods such as Monte-Carlo. This method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier Stokes equation, a stochastic conservative level set approach including reinitialization, as well as stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.

  1. Quantification of model uncertainty in aerosol optical thickness retrieval from Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2013-09-01

    We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.

  2. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  3. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  4. Detailed Uncertainty Analysis of the Ares I A106 Liftoff/Transition Database

    NASA Technical Reports Server (NTRS)

    Hanke, Jeremy L.

    2011-01-01

    The Ares I A106 Liftoff/Transition Force and Moment Aerodynamics Database describes the aerodynamics of the Ares I Crew Launch Vehicle (CLV) from the moment of liftoff through the transition from high to low total angles of attack at low subsonic Mach numbers. The database includes uncertainty estimates that were developed using a detailed uncertainty quantification procedure. The Ares I Aerodynamics Panel developed both the database and the uncertainties from wind tunnel test data acquired in the NASA Langley Research Center's 14- by 22-Foot Subsonic Wind Tunnel Test 591 using a 1.75 percent scale model of the Ares I and the tower assembly. The uncertainty modeling contains three primary uncertainty sources: experimental uncertainty, database modeling uncertainty, and database query interpolation uncertainty. The final database and uncertainty model represent a significant improvement in the quality of the aerodynamic predictions for this regime of flight over the estimates previously used by the Ares Project. The maximum possible aerodynamic force pushing the vehicle towards the launch tower assembly in a dispersed case using this database saw a 40 percent reduction from the worst-case scenario in previously released data for Ares I.

  5. A Framework for Quantifying Measurement Uncertainties and Uncertainty Propagation in HCCI/LTGC Engine Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petitpas, Guillaume; McNenly, Matthew J.; Whitesides, Russell A.

    In this study, a framework for estimating experimental measurement uncertainties for a Homogenous Charge Compression Ignition (HCCI)/Low-Temperature Gasoline Combustion (LTGC) engine testing facility is presented. Detailed uncertainty quantification is first carried out for the measurement of the in-cylinder pressure, whose variations during the cycle provide most of the information for performance evaluation. Standard uncertainties of other measured quantities, such as the engine geometry and speed, the air and fuel flow rate and the intake/exhaust dry molar fractions are also estimated. Propagating those uncertainties using a Monte Carlo simulation and Bayesian inference methods then allows for estimation of uncertainties of the mass-average temperature and composition at IVC and throughout the cycle; and also of the engine performances such as gross Integrated Mean Effective Pressure, Heat Release and Ringing Intensity. Throughout the analysis, nominal values for uncertainty inputs were taken from a well-characterized engine test facility. However, the analysis did not take into account the calibration practice of experiments run in that facility and the resulting uncertainty values are therefore not indicative of the expected accuracy of those experimental results. A future study will employ the methodology developed here to explore the effects of different calibration methods on the different uncertainty values in order to evaluate best practices for accurate engine measurements.
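
    The pressure-to-performance propagation step can be illustrated with a toy Monte Carlo sketch: a synthetic in-cylinder pressure trace is perturbed with assumed transducer gain and offset errors, and the p dV integral is recomputed for each draw. The trace, volume schedule and uncertainty magnitudes below are illustrative assumptions, not the facility characterization of the paper.

      import numpy as np

      rng = np.random.default_rng(7)

      # Simplified single-cycle volume and pressure traces (illustrative, not engine data)
      theta = np.linspace(-np.pi, np.pi, 721)       # crank angle over compression and expansion
      Vd = 5.0e-4                                   # displaced volume, m^3
      Vc = Vd / 9.0                                 # clearance volume, m^3
      V = Vc + 0.5 * Vd * (1.0 - np.cos(theta))     # crude volume model
      p = 1.0e5 * (V.max() / V) ** 1.3              # polytropic compression/expansion, Pa
      p[theta > 0] *= 2.0                           # crude heat-release jump after TDC

      def imep(pressure, volume):
          # Gross IMEP = closed-cycle integral of p dV divided by displaced volume
          return np.trapz(pressure, volume) / Vd

      # Monte Carlo: assumed 0.5% gain error and 2 kPa offset error on the pressure signal.
      # (The offset contribution largely cancels over a closed cycle.)
      n = 5000
      gain = rng.normal(1.0, 0.005, size=n)
      offset = rng.normal(0.0, 2.0e3, size=n)
      samples = np.array([imep(g * p + o, V) for g, o in zip(gain, offset)])

      print(f"IMEP = {samples.mean()/1e5:.3f} bar, 1-sigma = {samples.std()/1e5:.4f} bar")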

  6. A Framework for Quantifying Measurement Uncertainties and Uncertainty Propagation in HCCI/LTGC Engine Experiments

    DOE PAGES

    Petitpas, Guillaume; McNenly, Matthew J.; Whitesides, Russell A.

    2017-03-28

    In this study, a framework for estimating experimental measurement uncertainties for a Homogenous Charge Compression Ignition (HCCI)/Low-Temperature Gasoline Combustion (LTGC) engine testing facility is presented. Detailed uncertainty quantification is first carried out for the measurement of the in-cylinder pressure, whose variations during the cycle provide most of the information for performance evaluation. Standard uncertainties of other measured quantities, such as the engine geometry and speed, the air and fuel flow rate and the intake/exhaust dry molar fractions are also estimated. Propagating those uncertainties using a Monte Carlo simulation and Bayesian inference methods then allows for estimation of uncertainties of the mass-average temperature and composition at IVC and throughout the cycle; and also of the engine performances such as gross Integrated Mean Effective Pressure, Heat Release and Ringing Intensity. Throughout the analysis, nominal values for uncertainty inputs were taken from a well-characterized engine test facility. However, the analysis did not take into account the calibration practice of experiments run in that facility and the resulting uncertainty values are therefore not indicative of the expected accuracy of those experimental results. A future study will employ the methodology developed here to explore the effects of different calibration methods on the different uncertainty values in order to evaluate best practices for accurate engine measurements.

  7. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-01-01

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.

  8. Application of data fusion modeling (DFM) to site characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, D.W.; Gibbs, B.P.; Jones, W.F.

    1996-12-31

    Subsurface characterization is faced with substantial uncertainties because the earth is very heterogeneous, and typical data sets are fragmented and disparate. DFM removes many of the data limitations of current methods to quantify and reduce uncertainty for a variety of data types and models. DFM is a methodology to compute hydrogeological state estimates and their uncertainties from three sources of information: measured data, physical laws, and statistical models for spatial heterogeneities. The benefits of DFM are savings in time and cost through the following: the ability to update models in real time to help guide site assessment, improved quantification of uncertainty for risk assessment, and improved remedial design by quantifying the uncertainty in safety margins. A Bayesian inverse modeling approach is implemented with a Gauss Newton method where spatial heterogeneities are viewed as Markov random fields. Information from data, physical laws, and Markov models is combined in a Square Root Information Smoother (SRIS). Estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. An application of DFM to the Old Burial Ground at the DOE Savannah River Site will be presented. DFM estimates and quantifies uncertainty in hydrogeological parameters using variably saturated flow numerical modeling to constrain the estimation. Then uncertainties are propagated through the transport modeling to quantify the uncertainty in tritium breakthrough curves at compliance points.

  9. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
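
    A compact numpy sketch of first-order Sobol index estimation with a pick-and-freeze (Saltelli-type) estimator is given below, applied to a toy three-parameter surrogate; the model form and parameter count are placeholders, not the 388-parameter radiation model of the study.

      import numpy as np

      rng = np.random.default_rng(8)

      def model(x):
          # Toy surrogate for a radiative-heating response of three normalized inputs
          return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

      d, N = 3, 20_000
      A = rng.uniform(0.0, 1.0, size=(N, d))
      B = rng.uniform(0.0, 1.0, size=(N, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      # First-order Sobol indices via the pick-and-freeze estimator
      S1 = np.empty(d)
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]          # replace column i of A with column i of B
          S1[i] = np.mean(fB * (model(ABi) - fA)) / var

      print("first-order Sobol indices:", np.round(S1, 3))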

  10. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameter, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods are often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion process involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. 
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
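
    A minimal random-walk Metropolis sampler with a simple adaptive proposal covariance is sketched below, in the spirit of the adaptive Metropolis algorithms discussed above but without delayed rejection or differential-evolution proposals; the correlated Gaussian target is a stand-in for the HIV or heat-model posterior.

      import numpy as np

      rng = np.random.default_rng(9)

      def log_post(theta):
          # Placeholder posterior: a correlated 2-D Gaussian standing in for a calibration target
          cov = np.array([[1.0, 0.8], [0.8, 1.0]])
          return -0.5 * theta @ np.linalg.solve(cov, theta)

      n_iter, d = 20_000, 2
      chain = np.empty((n_iter, d))
      theta = np.zeros(d)
      lp = log_post(theta)
      prop_cov = 0.1 * np.eye(d)

      for k in range(n_iter):
          # Periodically adapt the proposal covariance to the chain history (AM-style update)
          if k > 1000 and k % 500 == 0:
              prop_cov = 2.38**2 / d * np.cov(chain[:k].T) + 1e-8 * np.eye(d)
          cand = rng.multivariate_normal(theta, prop_cov)
          lp_cand = log_post(cand)
          if np.log(rng.uniform()) < lp_cand - lp:      # Metropolis accept/reject step
              theta, lp = cand, lp_cand
          chain[k] = theta

      burn = chain[n_iter // 2:]
      print("posterior mean:", burn.mean(axis=0), " posterior sd:", burn.std(axis=0))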

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Jarman, Kenneth D.; Xu, Zhijie

    This report describes our initial research to quantify uncertainties in the identification and characterization of possible attack states in a network. As a result, we should be able to estimate the current state in which the network is operating, based on a wide variety of network data, and attach a defensible measure of confidence to these state estimates. The output of this research will be new uncertainty quantification (UQ) methods to help develop a process for model development and apply UQ to characterize attacks/adversaries, create an understanding of the degree to which methods scale to "big" data, and offer methods for addressing model approaches with regard to validation and accuracy.

  12. Calibration uncertainty for Advanced LIGO's first and second observing runs

    NASA Astrophysics Data System (ADS)

    Cahillane, Craig; Betzwieser, Joe; Brown, Duncan A.; Goetz, Evan; Hall, Evan D.; Izumi, Kiwamu; Kandhasamy, Shivaraj; Karki, Sudarshan; Kissel, Jeff S.; Mendell, Greg; Savage, Richard L.; Tuyenbayev, Darkhan; Urban, Alex; Viets, Aaron; Wade, Madeline; Weinstein, Alan J.

    2017-11-01

    Calibration of the Advanced LIGO detectors is the quantification of the detectors' response to gravitational waves. Gravitational waves incident on the detectors cause phase shifts in the interferometer laser light which are read out as intensity fluctuations at the detector output. Understanding this detector response to gravitational waves is crucial to producing accurate and precise gravitational wave strain data. Estimates of binary black hole and neutron star parameters and tests of general relativity require well-calibrated data, as miscalibrations will lead to biased results. We describe the method of producing calibration uncertainty estimates for both LIGO detectors in the first and second observing runs.

  13. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE-based measurements with observations from other sources.

  14. Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Sen, S.

    2016-12-01

    Various water resource projects developed on rivers originating from the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are very critical for river engineers and hydrologists for water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, an accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation is linked to the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of the rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors due to incorrect discharge measurement, variation in hydraulic conditions and depth measurement. In this study our objective is to obtain the best parameters of the rating curve that fit the limited record of observations and to estimate uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the lesser Himalayas, by a maximum-likelihood estimator. Quantification of uncertainties in the developed rating curves is obtained from the estimates of variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors varying between 0.006 and 1.831 m3/s. Discharge uncertainty in the Aglar watershed streams significantly depends on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolating by more than 15% beyond the maximum and 5% below the minimum gauged discharges is not recommended for these mountainous gauging sites.
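
    A minimal sketch of the kind of fit described above, assuming a standard power-law rating curve and Gaussian measurement errors (so that least squares coincides with maximum likelihood); the gaugings, initial values and the parametric-bootstrap propagation below are illustrative choices, not the study's data or exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical gaugings (stage h in m, discharge Q in m^3/s), for illustration.
h = np.array([0.42, 0.55, 0.70, 0.88, 1.05, 1.30, 1.62])
Q = np.array([0.35, 0.80, 1.60, 3.10, 5.00, 8.90, 15.8])

def rating(h, a, h0, b):
    # Standard power-law rating curve Q = a * (h - h0)^b
    return a * np.clip(h - h0, 1e-6, None) ** b

# Least squares under Gaussian errors is the maximum-likelihood fit;
# pcov approximates the variance-covariance matrix of (a, h0, b).
popt, pcov = curve_fit(rating, h, Q, p0=[5.0, 0.2, 2.0], maxfev=10000)

# Propagate parameter uncertainty to discharge at a new (extrapolated) stage
# by sampling parameters from the estimated covariance (parametric bootstrap).
rng = np.random.default_rng(0)
draws = rng.multivariate_normal(popt, pcov, size=5000)
h_new = 1.8
Q_draws = np.array([rating(h_new, *p) for p in draws])
print("Q(1.8 m) = %.2f +/- %.2f m^3/s" % (Q_draws.mean(), Q_draws.std()))
```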

  15. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scale and high precision has refined spatial descriptions and the representation of hydrological behaviors. This trend, however, is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE can be inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
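
    The sketch below illustrates the general idea under stated assumptions: a toy recession model stands in for the hydrological model, a hand-rolled differential-evolution loop stands in for the heuristic samplers, and the Nash-Sutcliffe efficiency serves as the informal GLUE likelihood; none of these choices come from the abstract's study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy recession model standing in for the hydrological model (illustration only).
t = np.arange(30.0)
def model(p):                          # p = (a, b)
    return p[0] * np.exp(-p[1] * t)
obs = model((10.0, 0.15)) + rng.normal(0.0, 0.4, t.size)

def nse(p):                            # Nash-Sutcliffe efficiency as informal likelihood
    sim = model(p)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Differential-evolution style sampling; every evaluated set goes into an archive.
lo, hi = np.array([1.0, 0.01]), np.array([20.0, 0.5])
pop = lo + (hi - lo) * rng.uniform(size=(40, 2))
archive, scores = [], []
for _ in range(60):                    # generations
    for i in range(len(pop)):
        r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(r1 + 0.8 * (r2 - r3), lo, hi)
        lt = nse(trial)
        if lt > nse(pop[i]):
            pop[i] = trial
        archive.append(trial)
        scores.append(lt)
archive, scores = np.array(archive), np.array(scores)

# GLUE step: keep behavioural sets above a threshold and form
# likelihood-weighted prediction bounds from their simulations.
behav = scores > 0.7
w = scores[behav] / scores[behav].sum()
sims = np.array([model(p) for p in archive[behav]])

def weighted_band(k, lower=0.05, upper=0.95):
    order = np.argsort(sims[:, k])
    cw = np.cumsum(w[order])
    vals = sims[order, k]
    return vals[np.searchsorted(cw, lower)], vals[np.searchsorted(cw, upper)]

print("5-95% GLUE band at t=0:", weighted_band(0))
```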

  16. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements

    DOE PAGES

    Jeon, Soyoung; Paciorek, Christopher J.; Wehner, Michael F.

    2016-02-16

    Extreme event attribution characterizes how anthropogenic climate change may have influenced the probability and magnitude of selected individual extreme weather and climate events. Attribution statements often involve quantification of the fraction of attributable risk (FAR) or the risk ratio (RR) and associated confidence intervals. Many such analyses use climate model output to characterize extreme event behavior with and without anthropogenic influence. However, such climate models may have biases in their representation of extreme events. To account for discrepancies in the probabilities of extreme events between observational datasets and model datasets, we demonstrate an appropriate rescaling of the model output based on the quantiles of the datasets to estimate an adjusted risk ratio. Our methodology accounts for various components of uncertainty in estimation of the risk ratio. In particular, we present an approach to construct a one-sided confidence interval on the lower bound of the risk ratio when the estimated risk ratio is infinity. We demonstrate the methodology using the summer 2011 central US heatwave and output from the Community Earth System Model. In this example, we find that the lower bound of the risk ratio is relatively insensitive to the magnitude and probability of the actual event.
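
    A rough sketch of the two ingredients named above: quantile-based rescaling of model output onto the observed distribution, and a risk-ratio estimate with a bootstrap interval. The synthetic samples, threshold choice and quantile clipping are hypothetical, and the paper's actual treatment of the infinite-risk-ratio case is more involved than this.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical seasonal-maximum temperature samples (illustration only).
obs      = rng.normal(30.0, 1.5, 60)       # observations
mod_hist = rng.normal(28.5, 1.2, 500)      # model, all forcings ("factual")
mod_nat  = rng.normal(27.8, 1.2, 500)      # model, natural forcings only

# Quantile-based bias correction: map model values onto the observed
# distribution through their empirical quantiles in the factual run.
def quantile_map(x, ref_model, ref_obs):
    q = np.searchsorted(np.sort(ref_model), x) / len(ref_model)
    return np.quantile(ref_obs, np.clip(q, 0.005, 0.995))

mod_hist_bc = quantile_map(mod_hist, mod_hist, obs)
mod_nat_bc  = quantile_map(mod_nat,  mod_hist, obs)

# Risk ratio for exceeding a high observed threshold, with a bootstrap interval.
event = np.quantile(obs, 0.95)
def risk_ratio(a, n):
    p1, p0 = np.mean(a > event), np.mean(n > event)
    return np.inf if p0 == 0 else p1 / p0   # the RR can be infinite, as in the paper

rr = risk_ratio(mod_hist_bc, mod_nat_bc)
boot = [risk_ratio(rng.choice(mod_hist_bc, mod_hist_bc.size),
                   rng.choice(mod_nat_bc, mod_nat_bc.size)) for _ in range(2000)]
print("RR = %.2f, one-sided 95%% lower bound = %.2f" % (rr, np.percentile(boot, 5)))
```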

  17. Uncertainty quantification in downscaling procedures for effective decisions in energy systems

    NASA Astrophysics Data System (ADS)

    Constantinescu, E. M.

    2010-12-01

    Weather is a major driver of both energy supply and demand, and with the massive adoption of renewable energy sources and changing economic and producer-consumer paradigms, the management of next-generation energy systems is becoming ever more challenging. The operational and planning decisions in energy systems are guided by efficiency and reliability, and therefore a central role in these decisions will be played by the ability to obtain weather condition forecasts with accurate uncertainty estimates. The appropriate temporal and spatial resolutions needed for effective decision-making, be it operational or planning, are not clear. It is arguably certain, however, that temporal scales such as hourly variations of temperature or wind conditions and ramp events are essential in this process. Planning activities involve decade- or decades-long projections of weather. One sensible way to achieve this is to embed regional weather models in a global climate system. This strategy acts as a downscaling procedure. Uncertainty modeling techniques must be developed in order to quantify and minimize forecast errors as well as to target the variables that most strongly impact the decision-making process. We discuss the challenges of obtaining a realistic uncertainty quantification estimate using mathematical algorithms based on scalable matrix-free computations and physics-based statistical models. The process of making decisions for energy management systems based on future weather scenarios is a very complex problem. We shall focus on the challenges in generating wind power predictions based on regional weather predictions, and discuss the implications of making the common assumptions about the uncertainty models.

  18. Ensemble-based uncertainty quantification for coordination and control of thermostatically controlled loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lian, Jianming; Engel, Dave

    2017-07-27

    This paper presents a general uncertainty quantification (UQ) framework that provides a systematic analysis of the uncertainty involved in the modeling of a control system, and helps to improve the performance of a control strategy.

  19. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.

  20. A Probabilistic Framework for Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.

    Quantification and propagation of uncertainties in cyber attacker payoffs is a key aspect within multiplayer, stochastic security games. These payoffs may represent penalties or rewards associated with player actions and are subject to various sources of uncertainty, including: (1) cyber-system state, (2) attacker type, (3) choice of player actions, and (4) cyber-system state transitions over time. Past research has primarily focused on representing defender beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and mathematical intervals. For cyber-systems, probability distributions may help address statistical (aleatory) uncertainties where the defender may assume inherent variability or randomness in the factors contributing to the attacker payoffs. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker’s payoff generation mechanism. Such epistemic uncertainties are more suitably represented as generalizations of probability boxes. This paper explores the mathematical treatment of such mixed payoff uncertainties. A conditional probabilistic reasoning approach is adopted to organize the dependencies between a cyber-system’s state, attacker type, player actions, and state transitions. This also enables the application of probabilistic theories to propagate various uncertainties in the attacker payoffs. An example implementation of this probabilistic framework and resulting attacker payoff distributions are discussed. A goal of this paper is also to highlight this uncertainty quantification problem space to the cyber security research community and encourage further advancements in this area.

  1. Advances and Challenges In Uncertainty Quantification with Application to Climate Prediction, ICF design and Science Stockpile Stewardship

    NASA Astrophysics Data System (ADS)

    Klein, R.; Woodward, C. S.; Johannesson, G.; Domyancic, D.; Covey, C. C.; Lucas, D. D.

    2012-12-01

    Uncertainty Quantification (UQ) is a critical field within 21st century simulation science that resides at the very center of the web of emerging predictive capabilities. The science of UQ holds the promise of giving much greater meaning to the results of complex large-scale simulations, allowing for quantifying and bounding uncertainties. This powerful capability will yield new insights into scientific predictions (e.g. Climate) of great impact on both national and international arenas, allow informed decisions on the design of critical experiments (e.g. ICF capsule design, MFE, NE) in many scientific fields, and assign confidence bounds to scientifically predictable outcomes (e.g. nuclear weapons design). In this talk I will discuss a major new strategic initiative (SI) we have developed at Lawrence Livermore National Laboratory to advance the science of Uncertainty Quantification at LLNL, focusing in particular on (a) the research and development of new algorithms and methodologies of UQ as applied to multi-physics multi-scale codes, (b) incorporation of these advancements into a global UQ Pipeline (i.e. a computational superstructure) that will simplify user access to sophisticated tools for UQ studies as well as act as a self-guided, self-adapting UQ engine for UQ studies on extreme computing platforms and (c) use laboratory applications as a test bed for new algorithms and methodologies. The initial SI focus has been on applications for the quantification of uncertainty associated with Climate prediction, but the validated UQ methodologies we have developed are now being fed back into Science Based Stockpile Stewardship (SSS) and ICF UQ efforts. To make advancements in several of these UQ grand challenges, I will focus in this talk on the following three research areas in our Strategic Initiative: Error Estimation in multi-physics and multi-scale codes; Tackling the "Curse of High Dimensionality"; and development of an advanced UQ Computational Pipeline to enable complete UQ workflow and analysis for ensemble runs at the extreme scale (e.g. exascale) with self-guiding adaptation in the UQ Pipeline engine. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).

  2. Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings

    NASA Astrophysics Data System (ADS)

    Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.

    2014-12-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.).

  3. WE-D-BRE-07: Variance-Based Sensitivity Analysis to Quantify the Impact of Biological Uncertainties in Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamp, F.; Brueningk, S.C.; Wilkens, J.J.

    Purpose: In particle therapy, treatment planning and evaluation are frequently based on biological models to estimate the relative biological effectiveness (RBE) or the equivalent dose in 2 Gy fractions (EQD2). In the context of the linear-quadratic model, these quantities depend on biological parameters (α, β) for ions as well as for the reference radiation and on the dose per fraction. The needed biological parameters as well as their dependency on ion species and ion energy typically are subject to large (relative) uncertainties of up to 20–40% or even more. Therefore it is necessary to estimate the resulting uncertainties in e.g. RBE or EQD2 caused by the uncertainties of the relevant input parameters. Methods: We use a variance-based sensitivity analysis (SA) approach, in which uncertainties in input parameters are modeled by random number distributions. The evaluated function is executed 10^4 to 10^6 times, each run with a different set of input parameters, randomly varied according to their assigned distribution. The sensitivity S is a variance-based ranking (from S = 0, no impact, to S = 1, only influential part) of the impact of input uncertainties. The SA approach is implemented for carbon ion treatment plans on 3D patient data, providing information about variations (and their origin) in RBE and EQD2. Results: The quantification enables 3D sensitivity maps, showing dependencies of RBE and EQD2 on different input uncertainties. The high number of runs allows displaying the interplay between different input uncertainties. The SA identifies input parameter combinations which result in extreme deviations of the result and the input parameter for which an uncertainty reduction is the most rewarding. Conclusion: The presented variance-based SA provides advantageous properties in terms of visualization and quantification of (biological) uncertainties and their impact. The method is very flexible, model independent, and enables a broad assessment of uncertainties. Supported by DFG grant WI 3745/1-1 and DFG cluster of excellence: Munich-Centre for Advanced Photonics.
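
    The following sketch shows a generic variance-based (Sobol pick-freeze) sensitivity calculation of the kind described, applied to a toy linear-quadratic effect model; the parameter distributions and the function itself are illustrative stand-ins, not the clinical model or data used in the abstract.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 100_000

# Toy linear-quadratic effect standing in for the RBE/EQD2 calculation
# (illustrative only; parameter distributions are hypothetical).
def effect(alpha, beta, d):
    return alpha * d + beta * d ** 2

def sample(n):
    alpha = rng.normal(0.20, 0.05, n)      # Gy^-1
    beta  = rng.normal(0.02, 0.006, n)     # Gy^-2
    d     = rng.normal(2.0, 0.1, n)        # dose per fraction, Gy
    return np.column_stack([alpha, beta, d])

# Pick-freeze (Sobol) estimator of first-order sensitivity indices.
A, B = sample(N), sample(N)
fA = effect(*A.T)
varY, S = fA.var(), []
for i in range(A.shape[1]):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                     # freeze input i at the A values
    fCi = effect(*Ci.T)
    S.append((np.mean(fA * fCi) - fA.mean() * fCi.mean()) / varY)
print(dict(zip(["alpha", "beta", "dose"], np.round(S, 3))))
```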

  4. Multiscale Structure of UXO Site Characterization: Spatial Estimation and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostrouchov, George; Doll, William E.; Beard, Les P.

    2009-01-01

    Unexploded ordnance (UXO) site characterization must consider both how the contamination is generated and how we observe that contamination. Within the generation and observation processes, dependence structures can be exploited at multiple scales. We describe a conceptual site characterization process, the dependence structures available at several scales, and consider their statistical estimation aspects. It is evident that most of the statistical methods that are needed to address the estimation problems are known but their application-specific implementation may not be available. We demonstrate estimation at one scale and propose a representation for site contamination intensity that takes full account of uncertainty, is flexible enough to answer regulatory requirements, and is a practical tool for managing detailed spatial site characterization and remediation. The representation is based on point process spatial estimation methods that require modern computational resources for practical application. These methods have provisions for including prior and covariate information.

  5. Analysis of ISO NE Balancing Requirements: Uncertainty-based Secure Ranges for ISO New England Dynamic Interchange Adjustments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etingov, Pavel V.; Makarov, Yuri V.; Wu, Di

    The document describes detailed uncertainty quantification (UQ) methodology developed by PNNL to estimate secure ranges of potential dynamic intra-hour interchange adjustments in the ISO-NE system and provides description of the dynamic interchange adjustment (DINA) tool developed under the same contract. The overall system ramping up and down capability, spinning reserve requirements, interchange schedules, load variations and uncertainties from various sources that are relevant to the ISO-NE system are incorporated into the methodology and the tool. The DINA tool has been tested by PNNL and ISO-NE staff engineers using ISO-NE data.

  6. Reduced Order Modeling Methods for Turbomachinery Design

    DTIC Science & Technology

    2009-03-01

    and Materials Conference, May 2006. [45] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis. New York, NY: Chapman & Hall... Macian-Juan, and R. Chawla, “A statistical methodology for quantification of uncertainty in best estimate code physical models,” Annals of Nuclear En

  7. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanical compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of the number, type and location of data.
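
    As a minimal sketch of the surrogate and sensitivity building blocks of such a workflow, the code below fits a low-order Legendre polynomial chaos expansion to a toy two-parameter forward model and reads first-order Sobol indices directly off the coefficients; the forward model, polynomial degree and sample size are hypothetical choices, and the maximum-likelihood inversion step is omitted.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

rng = np.random.default_rng(5)

# Hypothetical forward model: two uncertain compaction-related parameters,
# each rescaled to [-1, 1]; the functional form is illustrative only.
def forward(x1, x2):
    return np.exp(0.7 * x1) + 0.3 * x1 * x2 + 0.1 * x2 ** 2

# Training sample and least-squares fit of a total-degree-3 Legendre PCE.
n, deg = 400, 3
X = rng.uniform(-1, 1, size=(n, 2))
y = forward(X[:, 0], X[:, 1])
idx = [(a, b) for a, b in product(range(deg + 1), repeat=2) if a + b <= deg]

def leg(x, k):                      # k-th Legendre polynomial evaluated at x
    return legval(x, [0] * k + [1])

Phi = np.column_stack([leg(X[:, 0], a) * leg(X[:, 1], b) for a, b in idx])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Sobol indices follow from the PCE coefficients: for uniform inputs the
# variance contribution of term (a, b) is c^2 / ((2a+1)(2b+1)).
contrib = np.array([c ** 2 / ((2 * a + 1) * (2 * b + 1))
                    for c, (a, b) in zip(coef, idx)])
total = contrib[1:].sum()           # skip the constant (0, 0) term
S1 = sum(v for v, (a, b) in zip(contrib, idx) if a > 0 and b == 0) / total
S2 = sum(v for v, (a, b) in zip(contrib, idx) if b > 0 and a == 0) / total
print("first-order Sobol indices:", round(S1, 3), round(S2, 3))
```

    Once fitted, such a surrogate can be evaluated cheaply inside an inverse-modeling loop, which is the role the gPCE plays in the studies summarized here.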

  8. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  9. Uncertainty Quantification of Medium-Term Heat Storage From Short-Term Geophysical Experiments Using Bayesian Evidential Learning

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2018-04-01

    In theory, aquifer thermal energy storage (ATES) systems can recover in winter the heat stored in the aquifer during summer to increase the energy efficiency of the system. In practice, the energy efficiency is often lower than expected from simulations due to spatial heterogeneity of hydraulic properties or non-favorable hydrogeological conditions. A proper design of ATES systems should therefore consider the uncertainty of the prediction related to those parameters. We use a novel framework called Bayesian Evidential Learning (BEL) to estimate the heat storage capacity of an alluvial aquifer using a heat tracing experiment. BEL is based on two main stages: pre- and post-field data acquisition. Before data acquisition, Monte Carlo simulations and global sensitivity analysis are used to assess the information content of the data to reduce the uncertainty of the prediction. After data acquisition, prior falsification and machine learning based on the same Monte Carlo simulations are used to directly assess uncertainty on key prediction variables from observations. The result is a full quantification of the posterior distribution of the prediction conditioned on observed data, without any explicit full model inversion. We demonstrate the methodology in field conditions and validate the framework using independent measurements.
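
    A compressed sketch of the BEL idea under simplifying assumptions: a one-parameter prior ensemble generates both a short-term "data" variable and a medium-term "prediction" variable, and a plain polynomial regression (standing in for the machine-learning step) maps observed data directly to a predictive distribution without any model inversion; all quantities below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)

# Prior Monte Carlo ensemble driven by a single hypothetical aquifer parameter.
n = 2000
theta = rng.lognormal(mean=0.0, sigma=0.5, size=n)
d = 1.0 - np.exp(-theta) + rng.normal(0, 0.02, n)          # tracer-test summary (data)
h = 40.0 * theta / (1.0 + theta) + rng.normal(0, 1.0, n)   # recoverable heat (prediction)

# BEL-style step: learn the direct d -> h relation on the prior ensemble.
coef = np.polyfit(d, h, deg=3)
resid = h - np.polyval(coef, d)

# Posterior of the prediction given an observed data value, without inversion:
d_obs = 0.55
h_post = np.polyval(coef, d_obs) + rng.choice(resid, 5000)
print("prediction: %.1f +/- %.1f (illustrative units)" % (h_post.mean(), h_post.std()))
```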

  10. Uncertainty Estimation Improves Energy Measurement and Verification Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walter, Travis; Price, Phillip N.; Sohn, Michael D.

    2014-05-14

    Implementing energy conservation measures in buildings can reduce energy costs and environmental impacts, but such measures cost money to implement so intelligent investment strategies require the ability to quantify the energy savings by comparing actual energy used to how much energy would have been used in absence of the conservation measures (known as the baseline energy use). Methods exist for predicting baseline energy use, but a limitation of most statistical methods reported in the literature is inadequate quantification of the uncertainty in baseline energy use predictions. However, estimation of uncertainty is essential for weighing the risks of investing in retrofits. Most commercial buildings have, or soon will have, electricity meters capable of providing data at short time intervals. These data provide new opportunities to quantify uncertainty in baseline predictions, and to do so after shorter measurement durations than are traditionally used. In this paper, we show that uncertainty estimation provides greater measurement and verification (M&V) information and helps to overcome some of the difficulties with deciding how much data is needed to develop baseline models and to confirm energy savings. We also show that cross-validation is an effective method for computing uncertainty. In so doing, we extend a simple regression-based method of predicting energy use using short-interval meter data. We demonstrate the methods by predicting energy use in 17 real commercial buildings. We discuss the benefits of uncertainty estimates which can provide actionable decision making information for investing in energy conservation measures.
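
    The sketch below illustrates the cross-validation idea on synthetic daily meter data with a simple change-point-style baseline regression; the data, model form and the independence assumption used to scale the error to a reporting period are illustrative simplifications, not the paper's models or buildings.

```python
import numpy as np

rng = np.random.default_rng(17)

# Synthetic daily meter data (illustrative only): energy use depends on
# outdoor temperature plus noise.
n = 180
temp = rng.uniform(0, 30, n)
energy = 120 + 4.5 * np.maximum(temp - 15, 0) + rng.normal(0, 8, n)

# Simple change-point-style regression as the baseline model.
X = np.column_stack([np.ones(n), np.maximum(temp - 15, 0)])
def fit(Xtr, ytr):
    return np.linalg.lstsq(Xtr, ytr, rcond=None)[0]

# 10-fold cross-validation: out-of-sample residuals quantify baseline uncertainty.
folds = np.array_split(rng.permutation(n), 10)
cv_resid = []
for f in folds:
    mask = np.ones(n, bool); mask[f] = False
    beta = fit(X[mask], energy[mask])
    cv_resid.extend(energy[f] - X[f] @ beta)
cv_rmse = np.sqrt(np.mean(np.square(cv_resid)))

# Uncertainty of a predicted baseline total over a 90-day reporting period
# (assuming roughly independent daily errors, a deliberately simple choice).
print("daily CV RMSE: %.1f kWh; 90-day baseline s.d.: %.1f kWh"
      % (cv_rmse, cv_rmse * np.sqrt(90)))
```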

  11. A Bayes network approach to uncertainty quantification in hierarchically developed computational models

    DOE PAGES

    Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.

    2012-03-01

    Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because they are expensive, such tests are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.

  12. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results vary markedly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.

  13. Modelling Freshwater Resources at the Global Scale: Challenges and Prospects

    NASA Technical Reports Server (NTRS)

    Doll, Petra; Douville, Herve; Guntner, Andreas; Schmied, Hannes Muller; Wada, Yoshihide

    2015-01-01

    Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater-surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyper resolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.

  14. Uncertainty Quantification of GEOS-5 L-band Radiative Transfer Model Parameters Using Bayesian Inference and SMOS Observations

    NASA Technical Reports Server (NTRS)

    DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.

    2013-01-01

    Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25% of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square-differences in long-term Tb averages and standard deviations are found consistent with the respective estimated total simulation and observation error standard deviations of 3.1 K and 2.4 K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).

  15. Uncertainties in estimates of the risks of late effects from space radiation

    NASA Astrophysics Data System (ADS)

    Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P. B.; Dicello, J. F.

    2004-01-01

    Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which causes estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Using the linear-additivity model for radiation risks, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain an estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including a deep space outpost and Mars missions of duration of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.
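
    A bare-bones version of the Monte Carlo propagation described above is sketched below: the risk is written as a product of uncertain factors and each factor is sampled from a subjective distribution; the factor names, distributions and values are hypothetical placeholders rather than those of the cited study.

```python
import numpy as np

rng = np.random.default_rng(21)
n = 200_000

# Risk as a product of factors, each with a subjective uncertainty distribution
# (all names and numbers below are invented for illustration).
dose       = rng.normal(1.0, 0.10, n)              # physics/transport factor
quality    = rng.lognormal(np.log(2.5), 0.5, n)    # radiation quality factor
risk_coeff = rng.lognormal(np.log(0.04), 0.35, n)  # risk per unit dose
dose_rate  = rng.triangular(0.5, 1.0, 1.5, n)      # dose-rate modifier (divides)

risk = dose * quality * risk_coeff / dose_rate
lo, med, hi = np.percentile(risk, [2.5, 50, 97.5])
print("median risk %.3f, 95%% interval [%.3f, %.3f]" % (med, lo, hi))
```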

  16. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.

  17. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
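
    The sketch below shows the multilevel Monte Carlo telescoping estimator on a deliberately simplified travel-time functional of a 1D random conductivity field (no flow solver, spatially uncorrelated log-conductivity); the grid sizes and per-level sample allocations are arbitrary illustrative choices rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(13)

# Toy travel-time functional: t = integral of 1/K over a unit 1D domain, with
# log-K sampled on the finest grid and coarsened by block averaging.
fine_cells = 64
def travel_time(logK_fine, cells):
    K = np.exp(logK_fine.reshape(cells, -1).mean(axis=1))
    return np.mean(1.0 / K)

def sample_level(level, n):
    """Per-sample MLMC corrections Y_l = t_l - t_{l-1} (with Y_0 = t_0)."""
    cells = 8 * 2 ** level             # 8, 16, 32, 64 cells
    out = np.empty(n)
    for i in range(n):
        logK = rng.normal(0.0, 1.0, fine_cells)      # same field on both grids
        t_f = travel_time(logK, cells)
        t_c = travel_time(logK, cells // 2) if level > 0 else 0.0
        out[i] = t_f - t_c
    return out

# Multilevel estimator: many cheap coarse samples, few expensive fine ones.
n_per_level = [4000, 1000, 250, 60]
Y = [sample_level(l, n) for l, n in enumerate(n_per_level)]
mlmc_estimate = sum(y.mean() for y in Y)
mlmc_se = np.sqrt(sum(y.var() / y.size for y in Y))
print("E[travel time] ~ %.3f +/- %.3f (MLMC)" % (mlmc_estimate, mlmc_se))
```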

  18. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.

  19. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  20. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loéve decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  1. Atmospheric CO2 inversions on the mesoscale using data-driven prior uncertainties: quantification of the European terrestrial CO2 fluxes

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Rödenbeck, Christian; Karstens, Ute; Koch, Thomas F.; Heimann, Martin

    2018-03-01

    Optimized biogenic carbon fluxes for Europe were estimated from high-resolution regional-scale inversions, utilizing atmospheric CO2 measurements at 16 stations for the year 2007. Additional sensitivity tests with different data-driven error structures were performed. As the atmospheric network is rather sparse and consequently contains large spatial gaps, we use a priori biospheric fluxes to further constrain the inversions. The biospheric fluxes were simulated by the Vegetation Photosynthesis and Respiration Model (VPRM) at a resolution of 0.1° and optimized against eddy covariance data. Overall we estimate an a priori uncertainty of 0.54 GtC yr-1 related to the poor spatial representation between the biospheric model and the ecosystem sites. The sink estimated from the atmospheric inversions for the area of Europe (as represented in the model domain) ranges between 0.23 and 0.38 GtC yr-1 (0.39 and 0.71 GtC yr-1 up-scaled to geographical Europe). This is within the range of posterior flux uncertainty estimates of previous studies using ground-based observations.

  2. Uncertainty quantification in nanomechanical measurements using the atomic force microscope

    Treesearch

    Ryan Wagner; Robert Moon; Jon Pratt; Gordon Shaw; Arvind Raman

    2011-01-01

    Quantifying uncertainty in measured properties of nanomaterials is a prerequisite for the manufacture of reliable nanoengineered materials and products. Yet, rigorous uncertainty quantification (UQ) is rarely applied for material property measurements with the atomic force microscope (AFM), a widely used instrument that can measure properties at nanometer scale...

  3. Uncertainty assessment of a model for biological nitrogen and phosphorus removal: Application to a large wastewater treatment plant

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    In the last few years, the use of mathematical models in WasteWater Treatment Plant (WWTP) processes has become a common way to predict WWTP behaviour. However, mathematical models generally demand advanced input for their implementation that must be evaluated by an extensive data-gathering campaign, which cannot always be carried out. This fact, together with the intrinsic complexity of the model structure, leads to model results that may be very uncertain. Quantification of the uncertainty is imperative. However, despite the importance of uncertainty quantification, only a few studies have been carried out in the wastewater treatment field, and those studies only included a few of the sources of model uncertainty. Seeking to advance the field, the paper presents the uncertainty assessment of a mathematical model simulating biological nitrogen and phosphorus removal. The uncertainty assessment was conducted according to the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which has rarely been applied in the wastewater field. The model was based on activated-sludge models 1 (ASM) and 2 (ASM2). Different approaches can be used for uncertainty analysis. The GLUE methodology requires a large number of Monte Carlo simulations in which a random sampling of individual parameters drawn from probability distributions is used to determine a set of parameter values. Using this approach, model reliability was evaluated based on its capacity to globally limit the uncertainty. The method was applied to a large full-scale WWTP for which quantity and quality data was gathered. The analysis enabled useful insights to be gained for WWTP modelling, identifying the crucial aspects where higher uncertainty lies and where, therefore, more effort should be devoted in terms of both data gathering and modelling practice.

  4. PIV Uncertainty Methodologies for CFD Code Validation at the MIR Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabharwall, Piyush; Skifton, Richard; Stoots, Carl

    2013-12-01

    Currently, computational fluid dynamics (CFD) is widely used in the nuclear thermal hydraulics field for design and safety analyses. To validate CFD codes, high-quality multi-dimensional flow field data are essential. The Matched Index of Refraction (MIR) Flow Facility at Idaho National Laboratory has a unique capability to contribute to the development of validated CFD codes through the use of Particle Image Velocimetry (PIV). The significance of the MIR facility is that it permits non-intrusive velocity measurement techniques, such as PIV, through complex models without requiring probes and other instrumentation that disturb the flow. At the heart of any PIV calculation is the cross-correlation, which is used to estimate the displacement of particles in some small part of the image over the time span between two images. This image displacement is indicated by the location of the largest peak. In the MIR facility, uncertainty quantification is a challenging task due to the use of optical measurement techniques. Currently, this study is developing a reliable method to analyze uncertainty and sensitivity of the measured data and develop a computer code to automatically analyze the uncertainty/sensitivity of the measured data. The main objective of this study is to develop a well-established uncertainty quantification method for the MIR Flow Facility, which consists of many complicated uncertainty factors. In this study, the uncertainty sources are resolved in depth by categorizing them into uncertainties from the MIR flow loop and PIV system (including particle motion, image distortion, and data processing). Then, each uncertainty source is mathematically modeled or adequately defined. Finally, this study will provide a method and procedure to quantify the experimental uncertainty in the MIR Flow Facility with sample test results.

  5. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N; White, Devin A; Urban, Marie L

    2013-01-01

    The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
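
    A much simpler stand-in for the encoding step described above is sketched below: two non-statistical answers (a best guess and a plausible range) are converted into a Beta prior by moment matching, rather than by the paper's bivariate-Gaussian construction; the example numbers are invented.

```python
import numpy as np
from scipy import stats

def beta_from_elicitation(best_guess, plausible_low, plausible_high):
    """Encode two simple, non-statistical answers as a Beta prior.

    Moment-matching stand-in: treat the best guess as the mean and the
    plausible range as roughly a 95% interval (sd ~ range/4), then solve
    for the Beta(a, b) parameters.
    """
    m = best_guess
    s = (plausible_high - plausible_low) / 4.0
    v = min(s ** 2, m * (1 - m) * 0.999)       # keep the variance feasible
    k = m * (1 - m) / v - 1.0
    return m * k, (1 - m) * k

# Example: "most sites are around 30% occupied, could be anywhere from 10% to 60%"
a, b = beta_from_elicitation(0.30, 0.10, 0.60)
print("Beta(%.2f, %.2f); 90%% prior interval:" % (a, b),
      np.round(stats.beta.ppf([0.05, 0.95], a, b), 3))
```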

  6. Confidence in outcome estimates from systematic reviews used in informed consent.

    PubMed

    Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette

    2016-12-01

    Evidence-based dentistry now guides informed consent, in which clinicians are obliged to provide patients with the most current best evidence, or best estimates of outcomes, for regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet clinicians require that the estimates provided by systematic reviews be verified for validity and reliability, and be contextualized as to performance competency, so that they may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates from which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either with scoring instruments (Timmer analysis) or with a checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest margin performance competency (structural failure), high margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure). Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding regimens, therapies, treatments, and equipment or devices to patients in their clinical practices and when developing personal oral health care treatment plans. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Using nonlinear least squares to assess relative expression and its uncertainty in real-time qPCR studies.

    PubMed

    Tellinghuisen, Joel

    2016-03-01

    Relative expression ratios are commonly estimated in real-time qPCR studies by comparing the quantification cycle for the target gene with that for a reference gene in the treatment samples, normalized to the same quantities determined for a control sample. For the "standard curve" design, where data are obtained for all four of these at several dilutions, nonlinear least squares can be used to assess the amplification efficiencies (AE) and the adjusted ΔΔCq and its uncertainty, with automatic inclusion of the effect of uncertainty in the AEs. An algorithm is illustrated for the KaleidaGraph program. Copyright © 2015 Elsevier Inc. All rights reserved.
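
    A simplified sketch of the idea is shown below: amplification efficiencies are estimated from the dilution-series fits, the efficiency-adjusted expression ratio is formed, and the fit uncertainty is propagated by Monte Carlo. This is a Pfaffl-style variant rather than the joint nonlinear least-squares fit implemented by the author in KaleidaGraph, and all Cq values are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    log_dil = np.log10([1, 0.1, 0.01, 0.001])            # 10-fold dilution series

    def fit_curve(cq):
        """Fit Cq = b + m*log10(dilution); return parameters [m, b] and covariance."""
        params, cov = np.polyfit(log_dil, cq, 1, cov=True)
        return params, cov

    def efficiency(slope):
        return 10.0 ** (-1.0 / slope)                     # E = 2 for a perfect assay

    # Hypothetical Cq data: target and reference gene, control and treatment samples.
    curves = {
        "tgt_ctrl": fit_curve([20.1, 23.4, 26.8, 30.1]),
        "tgt_trt":  fit_curve([18.0, 21.3, 24.7, 28.0]),
        "ref_ctrl": fit_curve([22.0, 25.3, 28.7, 32.0]),
        "ref_trt":  fit_curve([22.1, 25.4, 28.8, 32.1]),
    }

    def ratio(p):
        """Efficiency-adjusted expression ratio at the undiluted point (Pfaffl form)."""
        m_tc, b_tc = p["tgt_ctrl"]; m_tt, b_tt = p["tgt_trt"]
        m_rc, b_rc = p["ref_ctrl"]; m_rt, b_rt = p["ref_trt"]
        e_tgt = efficiency(0.5 * (m_tc + m_tt))           # shared target efficiency
        e_ref = efficiency(0.5 * (m_rc + m_rt))           # shared reference efficiency
        return e_tgt ** (b_tc - b_tt) / e_ref ** (b_rc - b_rt)

    point = ratio({k: prm for k, (prm, _) in curves.items()})

    # Monte Carlo propagation of the fit uncertainty (slope/intercept covariance).
    draws = [ratio({k: rng.multivariate_normal(prm, cov) for k, (prm, cov) in curves.items()})
             for _ in range(2000)]
    print(f"relative expression ~ {point:.2f} (MC std ~ {np.std(draws):.2f})")
    ```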

  8. Uncertainty quantification of overpressure buildup through inverse modeling of compaction processes in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni M.; Ruffo, Paolo; Guadagnini, Alberto

    2017-03-01

    This study illustrates a procedure conducive to a preliminary risk analysis of overpressure development in sedimentary basins characterized by alternating depositional events of sandstone and shale layers. The approach rests on two key elements: (1) forward modeling of fluid flow and compaction, and (2) application of a model-complexity reduction technique based on a generalized polynomial chaos expansion (gPCE). The forward model considers a one-dimensional vertical compaction process. The gPCE model is then used in an inverse modeling context to obtain efficient model parameter estimation and uncertainty quantification. The methodology is applied to two field settings considered in previous studies, i.e., the Venture Field (Scotian Shelf, Canada) and the Navarin Basin (Bering Sea, Alaska, USA), relying on available porosity and pressure information for model calibration. It is found that the best result is obtained when porosity and pressure data are considered jointly in the model calibration procedure. Uncertainty propagation from unknown input parameters to model outputs, such as the vertical distribution of pore pressure, is investigated and quantified. This modeling strategy enables one to quantify the relative importance of key phenomena governing the feedback between sediment compaction and fluid flow processes and driving the buildup of fluid overpressure in stratified sedimentary basins characterized by the presence of low-permeability layers. The results illustrated here (1) allow for diagnosis of the critical role played by the parameters of quantitative formulations linking porosity and permeability in compacted shales and (2) provide an explicit and detailed quantification of the effects of their uncertainty in field settings.
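
    The surrogate idea can be illustrated with a toy stand-in for the compaction simulator: fit a tensor-product Legendre (gPCE-style) expansion to a modest number of forward runs with uniformly distributed inputs, then calibrate the parameters on the cheap surrogate. The forward model, polynomial degrees, and noise level below are assumptions for illustration.

    ```python
    import numpy as np
    from numpy.polynomial import legendre
    from scipy.optimize import minimize

    depths = np.linspace(0.2, 1.0, 5)                     # normalised depth grid

    def forward_model(perm_exp, poro_0):
        """Toy overpressure profile for two inputs scaled to [-1, 1]."""
        return (10.0 + 2.0 * perm_exp) * depths + np.exp(poro_0) * depths ** 2

    # Build the surrogate from a modest number of "expensive" forward runs.
    rng = np.random.default_rng(3)
    deg = [3, 3]
    x_train = rng.uniform(-1, 1, size=(60, 2))
    y_train = np.array([forward_model(a, b) for a, b in x_train])      # shape (60, 5)
    V = legendre.legvander2d(x_train[:, 0], x_train[:, 1], deg)        # tensor basis
    coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)                 # shape (16, 5)

    def surrogate(x):
        return (legendre.legvander2d(np.atleast_1d(x[0]), np.atleast_1d(x[1]), deg) @ coef)[0]

    # Use the cheap surrogate to calibrate against a noisy synthetic pressure profile.
    data = forward_model(0.4, -0.2) + rng.normal(0, 0.05, depths.size)
    misfit = lambda x: np.sum((surrogate(x) - data) ** 2)
    fit = minimize(misfit, x0=[0.0, 0.0], bounds=[(-1, 1), (-1, 1)])
    print("calibrated (scaled) parameters:", fit.x.round(2))           # approx [0.4, -0.2]
    ```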

  9. The need for precise and well-documented experimental data on prompt fission neutron spectra from neutron-induced fission of 239Pu

    DOE PAGES

    Neudecker, Denise; Taddeucci, Terry Nicholas; Haight, Robert Cameron; ...

    2016-01-06

    The spectrum of neutrons emitted promptly after 239Pu(n,f)—a so-called prompt fission neutron spectrum (PFNS)—is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken, including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties as well as the neutron multiplicity of selected critical assemblies. A summary of data and documentation needs to improve the quality of the experimental database is provided based on the results of the simulations and test-evaluations. Furthermore, given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the 239Pu PFNS as a ratio to either the 235U or 252Cf PFNS.

  10. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    NASA Technical Reports Server (NTRS)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    As part of the NASA Vehicle Systems Safety Technologies (VSST), Assuring Safe and Effective Aircraft Control Under Hazardous Conditions (Technical Challenge #3), an effort is underway within Boeing Research and Technology (BR&T) to address Advanced Modeling and Uncertainty Quantification for Flight Dynamics (VSST1-7). The scope of the effort is to develop and evaluate advanced multidisciplinary flight dynamics modeling techniques, including integrated uncertainties, to facilitate higher fidelity response characterization of current and future aircraft configurations approaching and during loss-of-control conditions. This approach is to incorporate multiple flight dynamics modeling methods for aerodynamics, structures, and propulsion, including experimental, computational, and analytical. Also to be included are techniques for data integration and uncertainty characterization and quantification. This research shall introduce new and updated multidisciplinary modeling and simulation technologies designed to improve the ability to characterize airplane response in off-nominal flight conditions. The research shall also introduce new techniques for uncertainty modeling that will provide a unified database model comprised of multiple sources, as well as an uncertainty bounds database for each data source such that a full vehicle uncertainty analysis is possible even when approaching or beyond Loss of Control boundaries. Methodologies developed as part of this research shall be instrumental in predicting and mitigating loss of control precursors and events directly linked to causal and contributing factors, such as stall, failures, damage, or icing. The tasks will include utilizing the BR&T Water Tunnel to collect static and dynamic data to be compared to the GTM extended WT database, characterizing flight dynamics in off-nominal conditions, developing tools for structural load estimation under dynamic conditions, devising methods for integrating various modeling elements into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.

  11. Bayesian uncertainty quantification in linear models for diffusion MRI.

    PubMed

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
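
    The closed-form machinery referred to above is ordinary conjugate Bayesian linear regression. The sketch below uses a generic toy design matrix in place of an actual dMRI basis (DTI, MAP-MRI, or CSD): with a Gaussian prior on the coefficients and Gaussian noise, the coefficient posterior and the posterior of any affine derived quantity are available in closed form.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_meas, n_coef = 60, 6
    X = rng.normal(size=(n_meas, n_coef))           # basis evaluated at measurement points
    c_true = rng.normal(size=n_coef)
    sigma2 = 0.05 ** 2                               # assumed noise variance
    y = X @ c_true + rng.normal(0, np.sqrt(sigma2), n_meas)

    # Conjugate Gaussian prior on coefficients: c ~ N(0, tau2 * I).
    tau2 = 1.0
    post_cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(n_coef) / tau2)
    post_mean = post_cov @ (X.T @ y) / sigma2

    # Any affine quantity q = w @ c + b has a Gaussian posterior; w and b are an
    # arbitrary hypothetical "derived feature" here.
    w, b = rng.normal(size=n_coef), 0.1
    q_mean = w @ post_mean + b
    q_std = np.sqrt(w @ post_cov @ w)
    print(f"q = {q_mean:.3f} +/- {q_std:.3f}  (true {w @ c_true + b:.3f})")
    ```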

  12. An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications

    NASA Technical Reports Server (NTRS)

    Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John

    2014-01-01

    A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.

  13. Uncertainties in estimates of the risks of late effects from space radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P. B.; Dicello, J. F.

    2004-01-01

    Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which causes estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Using the linear-additivity model for radiation risks, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain an estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including a deep space outpost and Mars missions of duration of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits. Published by Elsevier Ltd on behalf of COSPAR.
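
    The Monte Carlo propagation described above can be sketched as a product of uncertain correction factors applied to a nominal risk estimate. The factor names, distributions, and nominal value below are illustrative assumptions, not the values used by the authors.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    n = 100_000

    baseline_risk = 0.032                                 # nominal projected risk (assumed)
    factors = {
        "dose_physics":      rng.lognormal(0.0, 0.15, n),
        "quality_factor":    rng.lognormal(0.0, 0.50, n),
        "dose_rate_factor":  rng.lognormal(0.0, 0.35, n),
        "transfer_to_human": rng.lognormal(0.0, 0.25, n),
    }
    risk = baseline_risk * np.prod(np.vstack(list(factors.values())), axis=0)

    print("median risk :", np.round(np.median(risk), 4))
    print("95% interval:", np.round(np.percentile(risk, [2.5, 97.5]), 4))
    # Which factors dominate the spread? Rank by correlation with the total risk.
    for name, f in factors.items():
        print(f"{name:18s} rank correlation with risk: {spearmanr(f, risk)[0]:+.2f}")
    ```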

  14. Uncertainties in Estimates of the Risks of Late Effects from Space Radiation

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P.; Dicelli, J. F.

    2002-01-01

    The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which causes estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a Maximum Likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions of duration of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.

  15. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and against official figures for 2010, showing a very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
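
    The sensitivity-based banding can be illustrated with a toy sector projection: recompute the trajectory under perturbed driving factors and take the envelope of the runs as a nonstatistical uncertainty band. The sector model, growth rates, and perturbation ranges below are assumed for illustration.

    ```python
    import numpy as np
    from itertools import product

    years = np.arange(2010, 2021)

    def project(base_emission, activity_growth, ef_trend):
        """Emissions = base * activity index * emission-factor index."""
        t = years - years[0]
        return base_emission * (1 + activity_growth) ** t * (1 + ef_trend) ** t

    central = project(base_emission=100.0, activity_growth=0.02, ef_trend=-0.01)

    # Perturb each driving factor over an assumed plausible range and rerun.
    growth_range = [0.00, 0.02, 0.04]
    ef_range = [-0.02, -0.01, 0.00]
    runs = np.array([project(100.0, g, e) for g, e in product(growth_range, ef_range)])
    lower, upper = runs.min(axis=0), runs.max(axis=0)

    print(f"2020 central: {central[-1]:.1f}, band: [{lower[-1]:.1f}, {upper[-1]:.1f}]")
    ```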

  16. Gap Size Uncertainty Quantification in Advanced Gas Reactor TRISO Fuel Irradiation Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pham, Binh T.; Einerson, Jeffrey J.; Hawkes, Grant L.

    The Advanced Gas Reactor (AGR)-3/4 experiment is the combination of the third and fourth tests conducted within the tristructural isotropic fuel development and qualification research program. The AGR-3/4 test consists of twelve independent capsules containing a fuel stack in the center surrounded by three graphite cylinders and shrouded by a stainless steel shell. This capsule design enables temperature control of both the fuel and the graphite rings by varying the neon/helium gas mixture flowing through the four resulting gaps. Knowledge of fuel and graphite temperatures is crucial for establishing the functional relationship between fission product release and irradiation thermal conditions. These temperatures are predicted for each capsule using the commercial finite-element heat transfer code ABAQUS. Uncertainty quantification reveals that the gap size uncertainties are among the dominant factors contributing to predicted temperature uncertainty due to high input sensitivity and uncertainty. Gap size uncertainty originates from the fact that all gap sizes vary with time due to dimensional changes of the fuel compacts and three graphite rings caused by extended exposure to high temperatures and fast neutron irradiation. Gap sizes are estimated using as-fabricated dimensional measurements at the start of irradiation and post-irradiation examination dimensional measurements at the end of irradiation. Uncertainties in these measurements provide a basis for quantifying gap size uncertainty. However, lack of gap size measurements during irradiation and lack of knowledge about the dimension change rates lead to gap size modeling assumptions, which could increase gap size uncertainty. In addition, the dimensional measurements are performed at room temperature, and must be corrected to account for thermal expansion of the materials at high irradiation temperatures. Uncertainty in the thermal expansion coefficients for the graphite materials used in the AGR-3/4 capsules also increases gap size uncertainty. This study focuses on analysis of modeling assumptions and uncertainty sources to evaluate their impacts on the gap size uncertainty.

  17. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (here, its modal parameters). The focus of this paper is to answer whether the model uncertainties quantified using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., the damaged structure. The effects of prediction error bias on the uncertainty of the predicted values are also studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before the building was demolished, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as at two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process, instead of the commonly used zero-mean error function, can significantly reduce the prediction uncertainties.
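
    A much-reduced sketch of the calibration step is given below: the storey stiffnesses of a toy 3-DOF shear-building model are updated so that its natural frequencies match "identified" ones, while the standard deviation of the Gaussian prediction-error model is estimated jointly. The model, data, and maximum-likelihood treatment are simplified stand-ins for the hierarchical Bayesian updating of the full FE model.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.linalg import eigh

    m = np.diag([1.0, 1.0, 1.0])                     # storey masses (normalised)

    def stiffness(theta):
        """Shear-building stiffness matrix with two storey-stiffness scaling factors."""
        k1, k2, k3 = 1000.0 * theta[0], 1000.0 * theta[0], 1000.0 * theta[1]
        return np.array([[k1 + k2, -k2, 0.0],
                         [-k2, k2 + k3, -k3],
                         [0.0, -k3, k3]])

    def frequencies(theta):
        lam = eigh(stiffness(theta), m, eigvals_only=True)
        return np.sqrt(lam) / (2 * np.pi)            # natural frequencies in Hz

    # Synthetic "identified" frequencies: true factors [0.9, 1.1] plus 1% noise.
    f_identified = frequencies([0.9, 1.1]) * (1 + 0.01 * np.random.default_rng(6).normal(size=3))

    def neg_log_like(x):
        theta, log_sigma = x[:2], x[2]
        if np.any(theta <= 0):
            return 1e12                              # keep the stiffness physical
        resid = f_identified - frequencies(theta)
        sigma = np.exp(log_sigma)
        return 0.5 * np.sum(resid ** 2) / sigma ** 2 + resid.size * log_sigma

    fit = minimize(neg_log_like, x0=[1.0, 1.0, np.log(0.1)], method="Nelder-Mead")
    print("stiffness factors:", fit.x[:2].round(3), " error std [Hz]:", np.exp(fit.x[2]).round(3))
    ```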

  18. Uncertainty Quantification using Exponential Epi-Splines

    DTIC Science & Technology

    2013-06-01

    Leibler divergence. The choice of κ in applications can be informed by the fact that the Kullback-Leibler divergence between two normal densities, ϕ1... of random output quantities of interest. The framework systematically incorporates hard information derived from physics-based sensors, field test ... information, and determines the 'best' estimate within that family. Bayesian estimation makes use of prior soft information

  19. The influence of landscape characteristics and home-range size on the quantification of landscape-genetics relationships

    Treesearch

    Tabitha A. Graves; Tzeidle N. Wasserman; Milton Cezar Ribeiro; Erin L. Landguth; Stephen F. Spear; Niko Balkenhol; Colleen B. Higgins; Marie-Josee Fortin; Samuel A. Cushman; Lisette P. Waits

    2012-01-01

    A common approach used to estimate landscape resistance involves comparing correlations of ecological and genetic distances calculated among individuals of a species. However, the location of sampled individuals may contain some degree of spatial uncertainty due to the natural variation of animals moving through their home range or measurement error in plant or animal...

  20. Uncertainty Quantification in Remaining Useful Life of Aerospace Components using State Space Models and Inverse FORM

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Goebel, Kai

    2013-01-01

    This paper investigates the use of the inverse first-order reliability method (inverse-FORM) to quantify the uncertainty in the remaining useful life (RUL) of aerospace components. The prediction of remaining useful life is an integral part of system health prognosis, and directly helps in online health monitoring and decision-making. However, the prediction of remaining useful life is affected by several sources of uncertainty, and therefore it is necessary to quantify the uncertainty in the remaining useful life prediction. While system parameter uncertainty and physical variability can be easily included in inverse-FORM, this paper extends the methodology to include: (1) future loading uncertainty, (2) process noise, and (3) uncertainty in the state estimate. The inverse-FORM method has been used in this paper to (1) quickly obtain probability bounds on the remaining useful life prediction; and (2) calculate the entire probability distribution of the remaining useful life prediction, and the results are verified against Monte Carlo sampling. The proposed methodology is illustrated using a numerical example.

  1. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    NASA Astrophysics Data System (ADS)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from the parametric uncertainty. To quantify the conceptual uncertainty for a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site where a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown, and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the choice among the management options.
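
    The model-combination step can be sketched as follows: each conceptual model contributes a Monte Carlo sample of the mass discharge (parameter uncertainty), and the samples are pooled with Bayesian-model-averaging weights. The distributions and weights below are invented; in the study the weights come from Bayesian belief networks informed by site data and expert knowledge.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 20_000

    # Mass discharge [kg/yr] under three alternative conceptual models (toy values).
    model_samples = {
        "M1_matrix_dominated":   rng.lognormal(mean=np.log(0.5), sigma=0.6, size=n),
        "M2_fracture_dominated": rng.lognormal(mean=np.log(2.0), sigma=0.8, size=n),
        "M3_mixed":              rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n),
    }
    weights = {"M1_matrix_dominated": 0.2, "M2_fracture_dominated": 0.5, "M3_mixed": 0.3}

    # BMA predictive distribution: draw a model per realisation, then a value from it.
    names = list(model_samples)
    choice = rng.choice(len(names), size=n, p=[weights[k] for k in names])
    pooled = np.array([model_samples[names[c]][i] for i, c in enumerate(choice)])

    for k in names:
        print(f"{k:22s} median {np.median(model_samples[k]):5.2f} kg/yr (w={weights[k]})")
    print(f"BMA combined           median {np.median(pooled):5.2f} kg/yr, "
          f"90% interval {np.percentile(pooled, [5, 95]).round(2)}")
    ```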

  2. Matthew Reynolds | NREL

    Science.gov Websites

    food science. Matthew's research at NREL is focused on applying uncertainty quantification techniques. Research Interests: Uncertainty quantification; Computational multilinear algebra; Approximation theory ... and the Canonical Tensor Decomposition, Journal of Computational Physics (2017); Randomized Alternating ...

  3. Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification

    PubMed Central

    Pham, Tuan D.

    2014-01-01

    The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744

  4. From Quantification to Visualization: A Taxonomy of Uncertainty Visualization Approaches

    PubMed Central

    Potter, Kristin; Rosen, Paul; Johnson, Chris R.

    2014-01-01

    Quantifying uncertainty is an increasingly important topic across many domains. The uncertainties present in data come with many diverse representations having originated from a wide variety of disciplines. Communicating these uncertainties is a task often left to visualization without clear connection between the quantification and visualization. In this paper, we first identify frequently occurring types of uncertainty. Second, we connect those uncertainty representations to ones commonly used in visualization. We then look at various approaches to visualizing this uncertainty by partitioning the work based on the dimensionality of the data and the dimensionality of the uncertainty. We also discuss noteworthy exceptions to our taxonomy along with future research directions for the uncertainty visualization community. PMID:25663949

  5. Experimental validation of 2D uncertainty quantification for DIC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper will present corresponding experimental work used to check the validity of the uncertainty equations.

  6. Experimental validation of 2D uncertainty quantification for digital image correlation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper will present corresponding experimental work used to check the validity of the uncertainty equations.

  7. Quantifying the uncertainty in discharge data using hydraulic knowledge and uncertain gaugings: a Bayesian method named BaRatin

    NASA Astrophysics Data System (ADS)

    Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain

    2015-04-01

    River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). An operational version of the BaRatin software and its graphical interface are made available free of charge on request to the authors. J. Le Coz, B. Renard, L. Bonnifait, F. Branger, R. Le Boursicaud (2014). Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.
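
    A condensed sketch of the rating-curve estimation step (step 2 above) is given below: the parameters of a single-control power-law rating curve Q = a (h - b)^c are inferred from uncertain gaugings with a plain Metropolis sampler, and the posterior is propagated to the discharge at a chosen stage. The priors, gaugings, and one-control assumption are illustrative; BaRatin itself handles multiple hydraulic controls and remnant errors.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Gaugings: stage h [m], measured discharge Q [m3/s], gauging std [m3/s] (toy data).
    h_g = np.array([0.6, 0.9, 1.3, 1.8, 2.5])
    q_g = np.array([0.9, 2.6, 6.1, 12.0, 24.8])
    sd_g = 0.07 * q_g                                    # ~7% gauging uncertainty

    def rating(h, a, b, c):
        return np.where(h > b, a * np.clip(h - b, 1e-9, None) ** c, 0.0)

    def log_post(theta):
        a, b, c = theta
        if a <= 0 or c <= 0 or b >= h_g.min():           # weak feasibility constraints
            return -np.inf
        lp = -0.5 * ((np.log(a) - np.log(5)) / 1.0) ** 2  # vague prior on log a
        lp += -0.5 * ((c - 1.7) / 0.5) ** 2               # hydraulic prior on exponent
        resid = (q_g - rating(h_g, a, b, c)) / sd_g
        return lp - 0.5 * np.sum(resid ** 2)

    # Plain random-walk Metropolis sampler.
    theta, step = np.array([5.0, 0.1, 1.7]), np.array([0.3, 0.05, 0.05])
    lp_cur, chain = log_post(theta), []
    for _ in range(20_000):
        prop = theta + step * rng.normal(size=3)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp_cur:
            theta, lp_cur = prop, lp_prop
        chain.append(theta)
    chain = np.array(chain[5_000:])                       # discard burn-in

    q_at_2m = rating(2.0, *chain.T)                       # posterior discharge at h = 2 m
    print("Q(h=2 m): median %.1f, 90%% CI [%.1f, %.1f] m3/s"
          % (np.median(q_at_2m), *np.percentile(q_at_2m, [5, 95])))
    ```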

  8. Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity

    NASA Astrophysics Data System (ADS)

    Jian, P.; Hung, S.; Tseng, T.

    2013-12-01

    Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion mostly relies on the azimuthal coverage of stations, data quality, and the previously known earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records, and uncertain earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and demonstrated that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has been steadily devoted to building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. The linearized inversion scheme, adapting efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors of biased centroid positions, is implemented and inspected for improving source parameter determination of regional seismicity in Taiwan. Synthetic inversion tests demonstrate that the resolved moment tensors better match the hypothetical CMT solutions, and tend to suppress unreal non-double-couple components and reduce the trade-off between focal mechanism and centroid depth, if individual signal-to-noise ratios and correlation lengths for 3-component seismograms at each station and mislocation uncertainties are properly taken into account. We further test the capability of our scheme in retrieving robust CMT information for mid-sized (Mw~3.5) and offshore earthquakes in Taiwan, which offers immediate and broad applications in detailed modelling of the regional stress field and deformation pattern and in mapping subsurface velocity structures.
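
    The covariance mapping invoked above is, in the linearised setting, the standard least-squares result: for d = Gm + noise with noise covariance Cd, the parameter covariance is Cm = (G^T Cd^-1 G)^-1. The toy sketch below (with a random G standing in for real Green's functions) shows how treating waveform noise as correlated in time, rather than independent, inflates the source-parameter uncertainties.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_data, n_par = 200, 6                        # e.g. waveform samples vs MT components
    G = rng.normal(size=(n_data, n_par))

    sigma = 0.2
    t = np.arange(n_data)

    def param_cov(corr_len):
        if corr_len == 0:
            Cd = sigma ** 2 * np.eye(n_data)
        else:                                     # exponential correlation in time
            Cd = sigma ** 2 * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)
        return np.linalg.inv(G.T @ np.linalg.solve(Cd, G))

    for L in (0, 5, 20):
        Cm = param_cov(L)
        print(f"corr. length {L:2d}: parameter std = {np.sqrt(np.diag(Cm)).round(3)}")
    ```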

  9. Uncertainty quantification and propagation in nuclear density functional theory

    DOE PAGES

    Schunck, N.; McDonnell, J. D.; Higdon, D.; ...

    2015-12-23

    Nuclear density functional theory (DFT) is one of the main theoretical tools used to study the properties of heavy and superheavy elements, or to describe the structure of nuclei far from stability. While ongoing efforts seek to better root nuclear DFT in the theory of nuclear forces, energy functionals remain semi-phenomenological constructions that depend on a set of parameters adjusted to experimental data in finite nuclei. In this study, we review recent efforts to quantify the related uncertainties, and propagate them to model predictions. In particular, we cover the topics of parameter estimation for inverse problems, statistical analysis of model uncertainties and Bayesian inference methods. Illustrative examples are taken from the literature.

  10. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties from the inherent nature of the system as well as from the external environment are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, uncertainty quantification analysis and time-variant reliability estimation for closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of the controlled response histories are first confirmed, with a specific implementation of the random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Inspired by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are eventually presented to demonstrate the validity and applicability of the developed methodology.

  11. Exploiting active subspaces to quantify uncertainty in the numerical simulation of the HyShot II scramjet

    NASA Astrophysics Data System (ADS)

    Constantine, P. G.; Emory, M.; Larsson, J.; Iaccarino, G.

    2015-12-01

    We present a computational analysis of the reactive flow in a hypersonic scramjet engine with focus on effects of uncertainties in the operating conditions. We employ a novel methodology based on active subspaces to characterize the effects of the input uncertainty on the scramjet performance. The active subspace identifies one-dimensional structure in the map from simulation inputs to quantity of interest that allows us to reparameterize the operating conditions; instead of seven physical parameters, we can use a single derived active variable. This dimension reduction enables otherwise infeasible uncertainty quantification, considering the simulation cost of roughly 9500 CPU-hours per run. For two values of the fuel injection rate, we use a total of 68 simulations to (i) identify the parameters that contribute the most to the variation in the output quantity of interest, (ii) estimate upper and lower bounds on the quantity of interest, (iii) classify sets of operating conditions as safe or unsafe corresponding to a threshold on the output quantity of interest, and (iv) estimate a cumulative distribution function for the quantity of interest.
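
    The active-subspace construction can be sketched as follows: estimate C = E[∇f ∇f^T] from gradient samples and take the dominant eigenvector as the single active direction along which the inputs are reparameterised. The quantity of interest below is a cheap toy ridge function standing in for the scramjet simulation; only the gradient-sampling budget (68 evaluations) echoes the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    dim = 7                                        # seven normalised operating conditions
    w_true = rng.normal(size=dim)
    w_true /= np.linalg.norm(w_true)

    def grad_f(x):
        """Gradient of a ridge-like QoI f(x) = exp(0.7 w.x) + 0.01 x0^2."""
        g = 0.7 * np.exp(0.7 * x @ w_true)[..., None] * w_true
        g[..., 0] += 0.02 * x[..., 0]
        return g

    # Monte Carlo estimate of C from gradients at sampled operating conditions.
    X = rng.uniform(-1, 1, size=(68, dim))         # same budget as the 68 runs above
    grads = grad_f(X)
    C = grads.T @ grads / len(X)
    eigval, eigvec = np.linalg.eigh(C)
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

    print("eigenvalue decay:", (eigval / eigval[0]).round(4))
    active_dir = eigvec[:, 0]
    print("alignment with true ridge direction:", abs(active_dir @ w_true).round(3))

    # The scalar active variable y = active_dir . x can now replace the 7 inputs.
    y = X @ active_dir
    print("active variable range:", (y.min().round(2), y.max().round(2)))
    ```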

  12. Integrating uncertainty propagation in GNSS radio occultation retrieval: from excess phase to atmospheric bending angle profiles

    NASA Astrophysics Data System (ADS)

    Schwarz, Jakob; Kirchengast, Gottfried; Schwaerz, Marc

    2018-05-01

    Global Navigation Satellite System (GNSS) radio occultation (RO) observations are highly accurate, long-term stable data sets and are globally available as a continuous record from 2001. Essential climate variables for the thermodynamic state of the free atmosphere - such as pressure, temperature, and tropospheric water vapor profiles (involving background information) - can be derived from these records, which therefore have the potential to serve as climate benchmark data. However, to exploit this potential, atmospheric profile retrievals need to be very accurate and the remaining uncertainties quantified and traced throughout the retrieval chain from raw observations to essential climate variables. The new Reference Occultation Processing System (rOPS) at the Wegener Center aims to deliver such an accurate RO retrieval chain with integrated uncertainty propagation. Here we introduce and demonstrate the algorithms implemented in the rOPS for uncertainty propagation from excess phase to atmospheric bending angle profiles, for estimated systematic and random uncertainties, including vertical error correlations and resolution estimates. We estimated systematic uncertainty profiles with the same operators as used for the basic state profiles retrieval. The random uncertainty is traced through covariance propagation and validated using Monte Carlo ensemble methods. The algorithm performance is demonstrated using test day ensembles of simulated data as well as real RO event data from the satellite missions CHAllenging Minisatellite Payload (CHAMP); Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC); and Meteorological Operational Satellite A (MetOp). The results of the Monte Carlo validation show that our covariance propagation delivers correct uncertainty quantification from excess phase to bending angle profiles. The results from the real RO event ensembles demonstrate that the new uncertainty estimation chain performs robustly. Together with the other parts of the rOPS processing chain this part is thus ready to provide integrated uncertainty propagation through the whole RO retrieval chain for the benefit of climate monitoring and other applications.

  13. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of the time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. Then the mean and standard deviation of CO₂ saturation were calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6% with a corresponding maximum saturation of 30% for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data and on inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while a Markov chain Monte Carlo (MCMC) stochastic inverse approach may take days for a global search. This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
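
    The parametric bootstrap workflow can be condensed as below: fit a nonlinear model to the observed data, then repeatedly re-invert data resampled from the fit plus the estimated observation noise, and summarise the spread of the derived quantity. The two-parameter "inverse problem" and the resistivity-to-saturation mapping are toy stand-ins for the ERT inversion chain.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(11)
    x_obs = np.linspace(0.0, 1.0, 40)

    def model(x, rho0, k):
        """Toy nonlinear forward model (e.g. apparent resistivity vs position)."""
        return rho0 * np.exp(-k * x)

    # "Observed" data; the noise level would come from reciprocal measurements.
    noise_std = 0.05
    d_obs = model(x_obs, 2.0, 1.5) + rng.normal(0, noise_std, x_obs.size)

    def saturation(rho0, k):
        """Toy resistivity-to-saturation mapping (Archie-like, illustrative only)."""
        return 1.0 - (model(0.5, rho0, k) / 2.0) ** 0.5

    # Step 1: deterministic inversion of the observed data.
    p_hat, _ = curve_fit(model, x_obs, d_obs, p0=[1.0, 1.0])

    # Step 2: parametric bootstrap - resample data around the fit and re-invert.
    boot = []
    for _ in range(500):
        d_boot = model(x_obs, *p_hat) + rng.normal(0, noise_std, x_obs.size)
        p_b, _ = curve_fit(model, x_obs, d_boot, p0=p_hat)
        boot.append(saturation(*p_b))
    boot = np.array(boot)

    print(f"saturation estimate {saturation(*p_hat):.3f} "
          f"+/- {boot.std():.3f} (bootstrap std)")
    ```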

  14. Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Leisenring, Marc; Moradkhani, Hamid

    2012-10-01

    A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle filter based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a non-linear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSC were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.

  15. A methodology for uncertainty quantification in quantitative technology valuation based on expert elicitation

    NASA Astrophysics Data System (ADS)

    Akram, Muhammad Farooq Bin

    The management of technology portfolios is an important element of aerospace system design. New technologies are often applied to new product designs to ensure their competitiveness at the time they are introduced to market. The future performance of yet-to-be-designed components is inherently uncertain, necessitating subject matter expert knowledge, statistical methods, and financial forecasting. Estimates of the appropriate parameter settings often come from disciplinary experts, who may disagree with each other because of varying experience and background. Due to the inherently uncertain nature of expert elicitation in the technology valuation process, appropriate uncertainty quantification and propagation are critical. The uncertainty in defining the impact of an input on the performance parameters of a system makes it difficult to use traditional probability theory. Often the available information is not enough to assign appropriate probability distributions to uncertain inputs. Another problem faced during technology elicitation pertains to technology interactions in a portfolio. When multiple technologies are applied simultaneously to a system, their cumulative impact is often non-linear. Current methods assume that technologies are either incompatible or linearly independent. It is observed that, when knowledge about the problem is lacking, epistemic uncertainty is the most suitable representation of the process. It reduces the number of assumptions during the elicitation process, when experts are forced to assign probability distributions to their opinions without sufficient knowledge. Epistemic uncertainty can be quantified by many techniques. In the present research it is proposed that interval analysis and the Dempster-Shafer theory of evidence are better suited for the quantification of epistemic uncertainty in the technology valuation process. The proposed technique seeks to offset some of the problems faced when using deterministic or traditional probabilistic approaches for uncertainty propagation. Non-linear behavior in technology interactions is captured through expert-elicitation-based technology synergy matrices (TSMs). The proposed TSMs increase the fidelity of current technology forecasting methods by including higher-order technology interactions. A test case for the quantification of epistemic uncertainty was selected on a large-scale problem, a combined cycle power generation system, and a detailed multidisciplinary modeling and simulation environment was adopted for this problem. Results have shown that the evidence-theory-based technique provides more insight into the uncertainties arising from incomplete information or lack of knowledge as compared to deterministic or probability theory methods. Margin analysis was also carried out for both techniques. A detailed description of TSMs and their usage in conjunction with technology impact matrices and technology compatibility matrices is discussed. Various combination methods are also proposed for higher-order interactions, which can be applied according to expert opinion or historical data. The introduction of the technology synergy matrix enabled capturing higher-order technology interactions and improving the predicted system performance.

  16. Quantification of source uncertainties in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, J.; Tonini, R.; Molinari, I.; Tiberti, M. M.; Romano, F.; Grezio, A.; Melini, D.; Piatanesi, A.; Basili, R.; Lorito, S.

    2016-06-01

    We propose a procedure for uncertainty quantification in Probabilistic Tsunami Hazard Analysis (PTHA), with a special emphasis on the uncertainty related to statistical modelling of the earthquake source in Seismic PTHA (SPTHA), and on the separate treatment of subduction and crustal earthquakes (treated as background seismicity). An event tree approach and ensemble modelling are used in place of more classical approaches, such as the hazard integral and the logic tree. This procedure consists of four steps: (1) exploration of aleatory uncertainty through an event tree, with alternative implementations for exploring epistemic uncertainty; (2) numerical computation of tsunami generation and propagation up to a given offshore isobath; (3) (optional) site-specific quantification of inundation; (4) simultaneous quantification of aleatory and epistemic uncertainty through ensemble modelling. The proposed procedure is general and independent of the kind of tsunami source considered; however, we implement step 1, the event tree, specifically for SPTHA, focusing on seismic source uncertainty. To exemplify the procedure, we develop a case study considering seismic sources in the Ionian Sea (central-eastern Mediterranean Sea), using the coasts of Southern Italy as a target zone. The results show that an efficient and complete quantification of all the uncertainties is feasible even when treating a large number of potential sources and a large set of alternative model formulations. We also find that (i) treating separately subduction and background (crustal) earthquakes allows for optimal use of available information and for avoiding significant biases; (ii) both subduction interface and crustal faults contribute to the SPTHA, with different proportions that depend on source-target position and tsunami intensity; (iii) the proposed framework allows sensitivity and deaggregation analyses, demonstrating the applicability of the method for operational assessments.

  17. Sonic Boom Pressure Signature Uncertainty Calculation and Propagation to Ground Noise

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Bretl, Katherine N.; Walker, Eric L.; Pinier, Jeremy T.

    2015-01-01

    The objective of this study was to outline an approach for the quantification of uncertainty in sonic boom measurements and to investigate the effect of various near-field uncertainty representation approaches on ground noise predictions. These approaches included a symmetric versus asymmetric uncertainty band representation and a dispersion technique based on a partial sum Fourier series that allows for the inclusion of random error sources in the uncertainty. The near-field uncertainty was propagated to the ground level, along with additional uncertainty in the propagation modeling. Estimates of perceived loudness were obtained for the various types of uncertainty representation in the near-field. Analyses were performed on three configurations of interest to the sonic boom community: the SEEB-ALR, the 69° DeltaWing, and the LM 1021-01. Results showed that representation of the near-field uncertainty plays a key role in ground noise predictions. Using a Fourier series based dispersion approach can double the amount of uncertainty in the ground noise compared to a pure bias representation. Compared to previous computational fluid dynamics results, uncertainty in ground noise predictions was greater when considering the near-field experimental uncertainty.

  18. On uncertainty quantification in hydrogeology and hydrogeophysics

    NASA Astrophysics Data System (ADS)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate, and this review aims to help hydrogeologists and hydrogeophysicists identify suitable approaches for UQ that can be applied and further developed for their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored, leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.
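
    The multi-level Monte Carlo strategy mentioned above can be illustrated with a toy problem. The following Python sketch is not taken from the review; the geometric Brownian motion test case, sample counts and parameter values are illustrative assumptions. It shows the telescoping estimator in which coarse and fine discretization levels share the same random inputs, so that most samples are spent on the cheap coarse levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(level, n_samples, S0=1.0, mu=0.05, sigma=0.2, T=1.0, M=2):
    """Coupled fine/coarse Euler-Maruyama paths of geometric Brownian motion
    sharing the same Brownian increments; returns per-sample corrections."""
    n_fine = M ** level
    dt_f = T / n_fine
    dW = rng.normal(0.0, np.sqrt(dt_f), size=(n_samples, n_fine))
    S_f = np.full(n_samples, S0)
    for k in range(n_fine):
        S_f = S_f * (1.0 + mu * dt_f + sigma * dW[:, k])
    if level == 0:
        return S_f                      # base level: plain Monte Carlo samples
    n_coarse = n_fine // M
    dt_c = T / n_coarse
    dW_c = dW.reshape(n_samples, n_coarse, M).sum(axis=2)
    S_c = np.full(n_samples, S0)
    for k in range(n_coarse):
        S_c = S_c * (1.0 + mu * dt_c + sigma * dW_c[:, k])
    return S_f - S_c                    # level correction (small variance)

# Telescoping MLMC estimator: cheap levels get many samples, expensive fine
# levels get few, because the level corrections have small variance.
levels, n_samples = range(4), [20000, 10000, 5000, 2500]
estimate = sum(euler_pair(l, n).mean() for l, n in zip(levels, n_samples))
print(f"MLMC estimate of E[S_T]: {estimate:.4f}  (exact {np.exp(0.05):.4f})")
```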

  19. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

    PubMed Central

    Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
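
    As a hedged illustration of the predictive mean, confidence bounds and distribution that the abstract says PMML 4.3 can carry for a GPR model, the following Python sketch fits a Gaussian process regressor with scikit-learn on synthetic data. The kernel choice, noise level and data are assumptions for demonstration only and are unrelated to the paper's manufacturing data set.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy training data standing in for a manufacturing quantity of interest.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=40)

# RBF kernel plus a white-noise term; hyperparameters are fit by maximum
# marginal likelihood inside .fit().
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predictive mean and standard deviation give the distribution and the
# confidence bounds that the PMML GPR representation is meant to carry.
X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:4.1f}  mean={m:+.3f}  95% bound=[{m-1.96*s:+.3f}, {m+1.96*s:+.3f}]")
```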

  20. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    PubMed

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible markup language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.

  1. How much swamp are we talking here?: Propagating uncertainty about the area of coastal wetlands into the U.S. greenhouse gas inventory

    NASA Astrophysics Data System (ADS)

    Holmquist, J. R.; Crooks, S.; Windham-Myers, L.; Megonigal, P.; Weller, D.; Lu, M.; Bernal, B.; Byrd, K. B.; Morris, J. T.; Troxler, T.; McCombs, J.; Herold, N.

    2017-12-01

    Stable coastal wetlands can store substantial amounts of carbon (C) that can be released when they are degraded or eroded. The EPA recently incorporated coastal wetland net-storage and emissions within the Agricultural Forested and Other Land Uses category of the U.S. National Greenhouse Gas Inventory (NGGI). This was a seminal analysis, but its quantification of uncertainty needs improvement. We provide a value-added analysis by estimating that uncertainty, focusing initially on the most basic assumption, the area of coastal wetlands. We considered three sources: uncertainty in the areas of vegetation and salinity subclasses, uncertainty in the areas of changing or stable wetlands, and uncertainty in the inland extent of coastal wetlands. The areas of vegetation and salinity subtypes, as well as of stable or changing wetlands, were estimated from 2006 and 2010 maps derived from Landsat imagery by the Coastal Change Analysis Program (C-CAP). We generated unbiased area estimates and confidence intervals for C-CAP, taking into account mapped area, proportional areas of commission and omission errors, as well as the number of observations. We defined the inland extent of wetlands as all land below the current elevation of twice monthly highest tides. We generated probabilistic inundation maps integrating wetland-specific bias and random error in light detection and ranging (lidar) elevation maps, with the spatially explicit random error in tidal surfaces generated from tide gauges. This initial uncertainty analysis will be extended to calculate total propagated uncertainty in the NGGI by including the uncertainties in the amount of C lost from eroded and degraded wetlands, stored annually in stable wetlands, and emitted in the form of methane by tidal freshwater wetlands.
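
    The "unbiased area estimates and confidence intervals ... taking into account mapped area, proportional areas of commission and omission errors, as well as the number of observations" can be sketched with a standard design-based (stratified) accuracy-assessment estimator. The confusion-matrix counts and mapped areas below are purely hypothetical and are not the C-CAP values used in the study.

```python
import numpy as np

# Hypothetical 2-class confusion matrix (reference-sample counts) for a map
# of "stable wetland" vs "other"; values are purely illustrative.
n = np.array([[180, 20],     # mapped stable wetland: correct, commission error
              [15, 285]])    # mapped other: omission error, correct
mapped_area = np.array([4.0e5, 1.6e6])   # hectares mapped per class (illustrative)

W = mapped_area / mapped_area.sum()       # mapped area proportions
n_i = n.sum(axis=1)                       # sample size per mapped class
p = (W[:, None] / n_i[:, None]) * n       # estimated cell proportions p_ij

# Stratified (unbiased) area estimate per reference class and its standard
# error, following standard design-based accuracy-assessment formulas.
area_est = p.sum(axis=0) * mapped_area.sum()
var_p = ((W[:, None] ** 2) * (n / n_i[:, None]) * (1 - n / n_i[:, None])
         / (n_i[:, None] - 1)).sum(axis=0)
se_area = np.sqrt(var_p) * mapped_area.sum()

for cls, a, se in zip(["stable wetland", "other"], area_est, se_area):
    print(f"{cls:>14}: {a:,.0f} ha  +/- {1.96 * se:,.0f} ha (95% CI)")
```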

  2. Wave-optics uncertainty propagation and regression-based bias model in GNSS radio occultation bending angle retrievals

    NASA Astrophysics Data System (ADS)

    Gorbunov, Michael E.; Kirchengast, Gottfried

    2018-01-01

    A new reference occultation processing system (rOPS) will include a Global Navigation Satellite System (GNSS) radio occultation (RO) retrieval chain with integrated uncertainty propagation. In this paper, we focus on wave-optics bending angle (BA) retrieval in the lower troposphere and introduce (1) an empirically estimated boundary layer bias (BLB) model then employed to reduce the systematic uncertainty of excess phases and bending angles in about the lowest 2 km of the troposphere and (2) the estimation of (residual) systematic uncertainties and their propagation together with random uncertainties from excess phase to bending angle profiles. Our BLB model describes the estimated bias of the excess phase transferred from the estimated bias of the bending angle, for which the model is built, informed by analyzing refractivity fluctuation statistics shown to induce such biases. The model is derived from regression analysis using a large ensemble of Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) RO observations and concurrent European Centre for Medium-Range Weather Forecasts (ECMWF) analysis fields. It is formulated in terms of predictors and adaptive functions (powers and cross products of predictors), where we use six main predictors derived from observations: impact altitude, latitude, bending angle and its standard deviation, canonical transform (CT) amplitude, and its fluctuation index. Based on an ensemble of test days, independent of the days of data used for the regression analysis to establish the BLB model, we find the model very effective for bias reduction and capable of reducing bending angle and corresponding refractivity biases by about a factor of 5. The estimated residual systematic uncertainty, after the BLB profile subtraction, is lower bounded by the uncertainty from the (indirect) use of ECMWF analysis fields but is significantly lower than the systematic uncertainty without BLB correction. The systematic and random uncertainties are propagated from excess phase to bending angle profiles, using a perturbation approach and the wave-optical method recently introduced by Gorbunov and Kirchengast (2015), starting with estimated excess phase uncertainties. The results are encouraging and this uncertainty propagation approach combined with BLB correction enables a robust reduction and quantification of the uncertainties of excess phases and bending angles in the lower troposphere.

  3. Uncertainty in urban flood damage assessment due to urban drainage modelling and depth-damage curve estimation.

    PubMed

    Freni, G; La Loggia, G; Notaro, V

    2010-01-01

    Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases in most cases is overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become a standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancements, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are highly connected to data availability; this remains the main bottleneck in the expected flooding damage estimation. Such functions are usually affected by significant uncertainty intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the damage-depth function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly because a large part of the total uncertainty is dependent on depth-damage curves. Improving the estimation of these curves may provide better results in terms of uncertainty reduction than the adoption of detailed hydraulic models.
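
    A minimal Monte Carlo sketch of the comparison the paper describes, contrasting the uncertainty contributed by the hydraulic model (flood depth) with that contributed by the depth-damage curve; the exponential damage curve, the distributions and the exposed value are illustrative assumptions, not the paper's models or data.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical depth-damage curve: damage fraction saturating with depth.
def damage_fraction(depth_m, slope):
    return 1.0 - np.exp(-slope * np.clip(depth_m, 0.0, None))

# Source 1: hydraulic-model uncertainty in the simulated flood depth (m).
depth = rng.normal(loc=0.8, scale=0.15, size=N)
# Source 2: uncertainty in the damage-curve parameter fitted from sparse data.
slope = np.clip(rng.normal(loc=1.2, scale=0.4, size=N), 0.1, None)

exposed_value = 250_000.0          # illustrative value of exposed goods
damage = exposed_value * damage_fraction(depth, slope)

# Crude comparison of the two contributions: freeze one source at its
# nominal value at a time and look at the remaining spread.
d_depth_only = exposed_value * damage_fraction(depth, 1.2)
d_curve_only = exposed_value * damage_fraction(0.8, slope)
print(f"total damage: mean {damage.mean():,.0f}, std {damage.std():,.0f}")
print(f"std from flood depth (hydraulic model) only: {d_depth_only.std():,.0f}")
print(f"std from damage-curve parameter only:        {d_curve_only.std():,.0f}")
```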

  4. Uncertainty in temperature-based determination of time of death

    NASA Astrophysics Data System (ADS)

    Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan

    2018-03-01

    Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow a higher accuracy of ToD estimation as in principle all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that and prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.

  5. DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James; Bayless, Paul; Strydom, Gerhard

    2016-11-01

    Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas-Cooled Reactors (HTGR), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to the neutron cross-section covariances. Phase II is oriented towards the investigation of propagated uncertainties from the lattice to the coupled neutronics/thermal hydraulics core calculations. Nominal results for the prismatic single block (Ex.I-2a) and super cell models (Ex.I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross section generation. In this work, the TRITON/NEWT-flux-weighted cross sections obtained for Ex.I-2a and various models of Ex.I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.

  6. Towards robust quantification and reduction of uncertainty in hydrologic predictions: Integration of particle Markov chain Monte Carlo and factorial polynomial chaos expansion

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Ancell, B. C.

    2017-05-01

    Particle filtering techniques have been receiving increasing attention from the hydrologic community due to their ability to properly estimate model parameters and states of nonlinear and non-Gaussian systems. To facilitate a robust quantification of uncertainty in hydrologic predictions, it is necessary to explicitly examine the forward propagation and evolution of parameter uncertainties and their interactions that affect the predictive performance. This paper presents a unified probabilistic framework that merges the strengths of particle Markov chain Monte Carlo (PMCMC) and factorial polynomial chaos expansion (FPCE) algorithms to robustly quantify and reduce uncertainties in hydrologic predictions. A Gaussian anamorphosis technique is used to establish a seamless bridge between the data assimilation using the PMCMC and the uncertainty propagation using the FPCE through a straightforward transformation of posterior distributions of model parameters. The unified probabilistic framework is applied to the Xiangxi River watershed of the Three Gorges Reservoir (TGR) region in China to demonstrate its validity and applicability. Results reveal that the degree of spatial variability of soil moisture capacity is the most identifiable model parameter with the fastest convergence through the streamflow assimilation process. The potential interaction between the spatial variability in soil moisture conditions and the maximum soil moisture capacity has the most significant effect on the performance of streamflow predictions. In addition, parameter sensitivities and interactions vary in magnitude and direction over time due to temporal and spatial dynamics of hydrologic processes.
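
    The Gaussian anamorphosis step that bridges PMCMC posterior samples and the FPCE can be sketched as an empirical normal-score transform. The lognormal stand-in for a posterior marginal below is an assumption for illustration; the paper's actual hydrologic model parameters are not used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Stand-in for posterior parameter samples from a particle MCMC run:
# a skewed (lognormal) marginal, purely illustrative.
theta = rng.lognormal(mean=0.0, sigma=0.5, size=5000)

def gaussian_anamorphosis(samples):
    """Empirical normal-score transform: map samples to standard-normal
    scores via their ranks, and return an inverse (quantile) mapping."""
    ranks = stats.rankdata(samples) / (len(samples) + 1.0)
    z = stats.norm.ppf(ranks)
    sorted_s, sorted_z = np.sort(samples), np.sort(z)

    def inverse(zz):
        # Map standard-normal values back to the original parameter scale.
        return np.interp(zz, sorted_z, sorted_s)

    return z, inverse

z, inverse = gaussian_anamorphosis(theta)
# A chaos expansion can now be built in the Gaussian variable z, while
# inverse(z) recovers physically meaningful parameter values.
print("skewness after transform:", round(float(stats.skew(z)), 3))
print("back-transformed median: ", round(float(inverse(0.0)), 3))
```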

  7. The Need for Precise and Well-documented Experimental Data on Prompt Fission Neutron Spectra from Neutron-induced Fission of {sup 239}Pu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neudecker, D., E-mail: dneudecker@lanl.gov; Taddeucci, T.N.; Haight, R.C.

    2016-01-15

    The spectrum of neutrons emitted promptly after {sup 239}Pu(n,f)—a so-called prompt fission neutron spectrum (PFNS)—is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties as well as the neutron multiplicity of selected critical assemblies. A summary of data and documentation needs to improve the quality of the experimental database is provided based on the results of simulations and test-evaluations. Given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the {sup 239}Pu PFNS as a ratio to either the {sup 235}U or {sup 252}Cf PFNS.

  8. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  9. Quantification for complex assessment: uncertainty estimation in final year project thesis assessment

    NASA Astrophysics Data System (ADS)

    Kim, Ho Sung

    2013-12-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.

  10. Uncertainty quantification in operational modal analysis with stochastic subspace identification: Validation and applications

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin; Maes, Kristof; Lombaert, Geert; De Roeck, Guido

    2016-01-01

    Identified modal characteristics are often used as a basis for the calibration and validation of dynamic structural models, for structural control, for structural health monitoring, etc. It is therefore important to know their accuracy. In this article, a method for estimating the (co)variance of modal characteristics that are identified with the stochastic subspace identification method is validated for two civil engineering structures. The first structure is a damaged prestressed concrete bridge for which acceleration and dynamic strain data were measured in 36 different setups. The second structure is a mid-rise building for which acceleration data were measured in 10 different setups. There is a good quantitative agreement between the predicted levels of uncertainty and the observed variability of the eigenfrequencies and damping ratios between the different setups. The method can therefore be used with confidence for quantifying the uncertainty of the identified modal characteristics, also when some or all of them are estimated from a single batch of vibration data. Furthermore, the method is seen to yield valuable insight into the variability of the estimation accuracy from mode to mode and from setup to setup: the more informative a setup is regarding an estimated modal characteristic, the smaller the estimated variance.

  11. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
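
    A minimal sketch of the bootstrap resampling the authors describe for quantifying statistical uncertainty in a sample-based total-cost estimate; the facility sample, the cost distribution and the simple expansion estimator are hypothetical stand-ins for a properly weighted complex-survey estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical costing survey: per-facility annual delivery cost (USD) for a
# random sample of 40 facilities out of a program with 400 facilities.
sample_costs = rng.gamma(shape=3.0, scale=5_000.0, size=40)
N_FACILITIES = 400

def total_cost(costs, n_total=N_FACILITIES):
    # Simple expansion estimator: mean facility cost scaled to the program.
    return n_total * costs.mean()

# Nonparametric bootstrap: resample facilities with replacement and recompute
# the estimator to quantify its statistical uncertainty.
boot = np.array([
    total_cost(rng.choice(sample_costs, size=sample_costs.size, replace=True))
    for _ in range(5000)
])

point = total_cost(sample_costs)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated total program cost: {point:,.0f} USD")
print(f"bootstrap SE {boot.std(ddof=1):,.0f} USD, 95% CI [{lo:,.0f}, {hi:,.0f}]")
```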

  12. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964

  13. 3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, Greg; Wohlwend, Jen

    2017-10-02

    This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.

  14. Characterization of the efficiency and uncertainty of skimmed milk flocculation for the simultaneous concentration and quantification of water-borne viruses, bacteria and protozoa.

    PubMed

    Gonzales-Gustavson, Eloy; Cárdenas-Youngs, Yexenia; Calvo, Miquel; da Silva, Marcelle Figueira Marques; Hundesa, Ayalkibet; Amorós, Inmaculada; Moreno, Yolanda; Moreno-Mesonero, Laura; Rosell, Rosa; Ganges, Llilianne; Araujo, Rosa; Girones, Rosina

    2017-03-01

    In this study, the use of skimmed milk flocculation (SMF) to simultaneously concentrate viruses, bacteria and protozoa was evaluated. We selected strains of faecal indicator bacteria and pathogens, such as Escherichia coli and Helicobacter pylori. The viruses selected were adenovirus (HAdV 35), rotavirus (RoV SA-11), the bacteriophage MS2 and bovine viral diarrhoea virus (BVDV). The protozoa tested were Acanthamoeba, Giardia and Cryptosporidium. The mean recoveries with q(RT)PCR were 66% (HAdV 35), 24% (MS2), 28% (RoV SA-11), 15% (BVDV), 60% (E. coli), 30% (H. pylori) and 21% (Acanthamoeba castellanii). When testing the infectivity, the mean recoveries were 59% (HAdV 35), 12% (MS2), 26% (RoV SA-11) and 0.7% (BVDV). The protozoa Giardia lamblia and Cryptosporidium parvum were studied by immunofluorescence with recoveries of 18% and 13%, respectively. Although q(RT)PCR consistently showed higher quantification values (as expected), q(RT)PCR and the infectivity assays showed similar recoveries for HAdV 35 and RoV SA-11. Additionally, we investigated modelling the variability and uncertainty of the recovery with this method to extrapolate the quantification obtained by q(RT)PCR and estimate the real concentration. The 95% prediction intervals of the real concentration of the microorganisms inoculated were calculated using a general non-parametric bootstrap procedure adapted in our context to estimate the technical error of the measurements. SMF shows recoveries with a low variability that permits the use of a mathematical approximation to predict the concentration of the pathogen and indicator with acceptable low intervals. The values of uncertainty may be used for a quantitative microbial risk analysis or diagnostic purposes. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  15. Uncertainty quantification in fission cross section measurements at LANSCE

    DOE PAGES

    Tovesson, F.

    2015-01-09

    Neutron-induced fission cross sections have been measured for several isotopes of uranium and plutonium at the Los Alamos Neutron Science Center (LANSCE) over a wide range of incident neutron energies. The total uncertainties in these measurements are in the range 3–5% above 100 keV of incident neutron energy, which result from uncertainties in the target, neutron source, and detector system. The individual sources of uncertainty are assumed to be uncorrelated; however, correlations in the cross section across neutron energy bins are considered. The quantification of the uncertainty contributions is described here.
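
    A rough numerical sketch of how uncorrelated uncertainty sources can be combined in quadrature per energy bin while still producing correlations across bins; the component magnitudes and cross-bin correlation assumptions below are illustrative and are not the LANSCE values.

```python
import numpy as np

# Illustrative energy bins (MeV) and fractional uncertainty components
# (target, neutron flux, detector), assumed uncorrelated with each other.
E = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
components = {
    "target":   np.full(E.size, 0.02),
    "flux":     np.array([0.03, 0.025, 0.02, 0.02, 0.025]),
    "detector": np.full(E.size, 0.02),
}

# Total per-bin uncertainty: quadrature sum of the uncorrelated components.
total = np.sqrt(sum(u ** 2 for u in components.values()))

# Assumed correlation of each component across energy bins (1 = fully
# correlated, e.g. a common normalization; 0 = independent counting errors).
corr_across_bins = {"target": 1.0, "flux": 0.5, "detector": 0.0}

# Covariance of the relative cross section across bins.
cov = np.zeros((E.size, E.size))
for name, u in components.items():
    c = corr_across_bins[name]
    R = np.full((E.size, E.size), c) + (1 - c) * np.eye(E.size)
    cov += np.outer(u, u) * R

print("per-bin total uncertainty (%):", np.round(100 * total, 2))
print("correlation(bin0, bin4):", round(cov[0, 4] / np.sqrt(cov[0, 0] * cov[4, 4]), 3))
```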

  16. Overview of the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.

  17. Summary Findings from the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.

  18. Bayesian flood forecasting methods: A review

    NASA Astrophysics Data System (ADS)

    Han, Shasha; Coulibaly, Paulin

    2017-08-01

    Over the past few decades, floods have been seen as one of the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, then their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of uncertainty associated with the hydrologic forecast is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretic framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and a reasonable analytic-numerical computation method, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied in flood forecasting from 1999 till now. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting, then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way for flood estimation; it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. ensemble Bayesian forecasting system, Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty. In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvements. Future research in the context of Bayesian flood forecasting should focus on the assimilation of various sources of newly available information and on the improvement of predictive performance assessment methods.

  19. Uncertainty quantification for nuclear density functional theory and information content of new measurements.

    PubMed

    McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W

    2015-03-27

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  20. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  1. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
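
    A minimal sketch of the kind of forward uncertainty propagation and input ranking described above, using plain Monte Carlo on a cheap analytic stand-in for the ice-sheet model; the response function, parameter ranges and the rank-correlation sensitivity measure are assumptions for illustration and do not reproduce f.ETISh results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
N = 20_000

# Cheap analytic stand-in for the ice-sheet model: sea-level contribution (m)
# as a function of three uncertain inputs. Forms and ranges are illustrative.
def sea_level_contribution(melt, calving, warming):
    return 0.05 * melt ** 1.5 + 0.02 * calving + 0.08 * warming

melt = rng.uniform(0.5, 2.0, N)        # sub-ice-shelf melt factor
calving = rng.uniform(0.5, 1.5, N)     # calving-rate factor
warming = rng.normal(3.0, 1.0, N)      # regional warming (deg C)

slr = sea_level_contribution(melt, calving, warming)
lo, med, hi = np.percentile(slr, [5, 50, 95])
print(f"sea-level contribution: median {med:.2f} m, 90% range [{lo:.2f}, {hi:.2f}] m")

# Rank input uncertainties by how strongly they drive the output spread,
# using Spearman rank correlation as a simple global sensitivity measure.
for name, x in [("melt", melt), ("calving", calving), ("warming", warming)]:
    rho, _ = stats.spearmanr(x, slr)
    print(f"  {name:8s} rank correlation with output: {rho:+.2f}")
```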

  2. Quantification of uncertainty for fluid flow in heterogeneous petroleum reservoirs

    NASA Astrophysics Data System (ADS)

    Zhang, Dongxiao

    Detailed description of the heterogeneity of oil/gas reservoirs is needed to make performance predictions of oil/gas recovery. However, only limited measurements at a few locations are usually available. This combination of significant spatial heterogeneity with incomplete information about it leads to uncertainty about the values of reservoir properties and thus, to uncertainty in estimates of production potential. The theory of stochastic processes provides a natural method for evaluating these uncertainties. In this study, we present a stochastic analysis of transient, single phase flow in heterogeneous reservoirs. We derive general equations governing the statistical moments of flow quantities by perturbation expansions. These moments can be used to construct confidence intervals for the flow quantities (e.g., pressure and flow rate). The moment equations are deterministic and can be solved numerically with existing solvers. The proposed moment equation approach has certain advantages over the commonly used Monte Carlo approach.

  3. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.

    Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, which model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
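
    The two-phase Monte Carlo idea (an outer loop over epistemic, interval-valued quantities and an inner loop over aleatory, distribution-valued quantities) can be sketched as follows; the notional payoff model, the interval and the distributions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-phase (nested) Monte Carlo sketch for a notional attacker payoff:
#   payoff = value_gained * p_success - detection_cost
# 'value_gained' is aleatory (lognormal); 'p_success' is epistemic, known
# only as an interval. All numbers are hypothetical.
N_EPISTEMIC, N_ALEATORY = 200, 2000
p_success_interval = (0.2, 0.6)
detection_cost = 10.0

cdf_family = []
for _ in range(N_EPISTEMIC):                      # outer loop: epistemic draws
    p_success = rng.uniform(*p_success_interval)  # sample within the interval
    value = rng.lognormal(mean=3.0, sigma=0.5, size=N_ALEATORY)  # inner loop
    payoff = value * p_success - detection_cost
    cdf_family.append(np.sort(payoff))

# Probability-box style summary: envelope of the aleatory CDFs over the
# epistemic outer loop, reported here at a single threshold.
threshold = 10.0
probs = [np.mean(sample <= threshold) for sample in cdf_family]
print(f"P(payoff <= {threshold}):  [{min(probs):.3f}, {max(probs):.3f}] "
      "(lower/upper bound over epistemic uncertainty)")
```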

  4. Uncertainty quantification and validation of combined hydrological and macroeconomic analyses.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, Jacquelynne; Parks, Mancel Jordan; Jennings, Barbara Joan

    2010-09-01

    Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come, and uncertainty in the local to national impacts means that perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.

  5. Uncertainties in Projecting Risks of Late Effects from Space Radiation

    NASA Astrophysics Data System (ADS)

    Cucinotta, F.; Schimmerling, W.; Peterson, L.; Wilson, J.; Saganti, P.; Dicello, J.

    The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, CNS risks, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios including ISS, lunar station, deep space outpost, and Mars missions of 360, 660, and 1000 days duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of the primary factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.
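
    A minimal sketch of Monte Carlo sampling from subjective uncertainty distributions for a multiplicative risk projection; the factor names, distributions and values below are illustrative assumptions and are not the NASA risk model.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 50_000

# Risk projection as a product of factors, each with a subjective uncertainty
# distribution (all distributions and values below are purely illustrative):
#   risk = dose * quality_factor * risk_coefficient * dose-rate correction
dose = rng.normal(0.4, 0.05, N)                    # physical dose (Gy-equivalent)
quality = rng.lognormal(np.log(2.5), 0.4, N)       # radiation quality factor
risk_coeff = rng.lognormal(np.log(0.05), 0.3, N)   # excess risk per unit dose
ddref = 1.0 / rng.uniform(1.0, 2.0, N)             # dose-rate reduction factor

risk = dose * quality * risk_coeff * ddref
median, upper95 = np.percentile(risk, [50, 95])
print(f"projected risk: median {100*median:.2f}%, 95th percentile {100*upper95:.2f}%")

# The spread of the product distribution (not just a point estimate) is what
# an uncertainty quantification of this type compares against mission limits.
print(f"fold-uncertainty (95th / median): {upper95/median:.1f}x")
```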

  6. On the short-term uncertainty in performance of a point absorber wave energy converter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Ryan Geoffrey; Michelen, Carlos; Manuel, Lance

    2016-03-01

    Of interest, in this study, is the quantification of uncertainty in the performance of a two-body wave point absorber (Reference Model 3 or RM3), which serves as a wave energy converter (WEC). We demonstrate how simulation tools may be used to establish short-term relationships between any performance parameter of the WEC device and wave height in individual sea states. We demonstrate this methodology for two sea states. Efficient structural reliability methods, validated using more expensive Monte Carlo sampling, allow the estimation of uncertainty in performance of the device. Such methods, when combined with metocean data quantifying the likelihood of different sea states, can be useful in long-term studies and in reliability-based design.

  7. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
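
    As a hedged illustration of the Polynomial Chaos Expansion surrogate mentioned above, the following sketch fits a total-degree-2 expansion in probabilists' Hermite polynomials by least squares to a cheap stand-in response; the response function and sample sizes are assumptions, not the S4T aeroelastic model.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(8)

# Cheap stand-in for an aeroelastic response depending on two uncertain
# standard-normal inputs; purely illustrative.
def response(xi):
    return 1.0 + 0.3 * xi[:, 0] + 0.1 * xi[:, 1] + 0.05 * xi[:, 0] * xi[:, 1]

# Sample the Gaussian germ, run the "model", and fit a total-degree-2 chaos
# expansion in probabilists' Hermite polynomials by least squares.
xi = rng.standard_normal((200, 2))
y = response(xi)
H1, H2 = hermevander(xi[:, 0], 2), hermevander(xi[:, 1], 2)   # He_0..He_2
basis = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
Psi = np.column_stack([H1[:, i] * H2[:, j] for i, j in basis])
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Mean and variance follow from the coefficients and the known Hermite norms
# E[He_n(X)^2] = n! for standard-normal inputs.
norms = np.array([factorial(i) * factorial(j) for i, j in basis], dtype=float)
mean_pce = coeffs[0]
std_pce = np.sqrt(np.sum(coeffs[1:] ** 2 * norms[1:]))
print(f"PCE surrogate: mean ~ {mean_pce:.3f}, std ~ {std_pce:.3f}")
```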

  8. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The human error probability (HEP) for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
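
    The reliability-physics idea of two competing random variables can be sketched directly: the human error probability is the probability that the performance time exceeds the phenomenological time. The lognormal distributions and their parameters below are hypothetical placeholders for the response-surface and interview-based estimates described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 200_000

# A human error occurs when the time needed to perform the action exceeds the
# time available before the phenomenon (e.g. core uncovery) completes.
# Both distributions below are hypothetical.
phenomenological_time = rng.lognormal(mean=np.log(30.0), sigma=0.3, size=N)  # minutes
performance_time = rng.lognormal(mean=np.log(12.0), sigma=0.6, size=N)       # minutes

hep = np.mean(performance_time > phenomenological_time)
se = np.sqrt(hep * (1 - hep) / N)
print(f"human error probability: {hep:.4f} +/- {1.96 * se:.4f} (95% MC interval)")
```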

  9. Reference tissue quantification of DCE-MRI data without a contrast agent calibration

    NASA Astrophysics Data System (ADS)

    Walker-Samuel, Simon; Leach, Martin O.; Collins, David J.

    2007-02-01

    The quantification of dynamic contrast-enhanced (DCE) MRI data conventionally requires a conversion from signal intensity to contrast agent concentration by measuring a change in the tissue longitudinal relaxation rate, R1. In this paper, it is shown that the use of a spoiled gradient-echo acquisition sequence (optimized so that signal intensity scales linearly with contrast agent concentration) in conjunction with a reference tissue-derived vascular input function (VIF), avoids the need for the conversion to Gd-DTPA concentration. This study evaluates how to optimize such sequences and which dynamic time-series parameters are most suitable for this type of analysis. It is shown that signal difference and relative enhancement provide useful alternatives when full contrast agent quantification cannot be achieved, but that pharmacokinetic parameters derived from both contain sources of error (such as those caused by differences between reference tissue and region of interest proton density and native T1 values). It is shown in a rectal cancer study that these sources of uncertainty are smaller when using signal difference, compared with relative enhancement (15 ± 4% compared with 33 ± 4%). Both of these uncertainties are of the order of those associated with the conversion to Gd-DTPA concentration, according to literature estimates.

  10. Full uncertainty quantification of N2O and NO emissions using the biogeochemical model LandscapeDNDC on site and regional scale

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Santabarbara, Ignacio; Kiese, Ralf; Butterbach-Bahl, Klaus

    2017-04-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional/national scale and are outlined as the most advanced methodology (Tier 3) in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems and are thus thought to be widely applicable under various conditions and at various spatial scales. Process-based modelling requires high spatial resolution input data on soil properties, climate drivers and management information. The acceptance of model-based inventory calculations depends on the assessment of the inventory's uncertainty (model, input data and parameter induced uncertainties). In this study we fully quantify the uncertainty in modelling soil N2O and NO emissions from arable, grassland and forest soils using the biogeochemical model LandscapeDNDC. We address model induced uncertainty (MU) by contrasting two different soil biogeochemistry modules within LandscapeDNDC. The parameter induced uncertainty (PU) was assessed by using joint parameter distributions for key parameters describing microbial C and N turnover processes as obtained by different Bayesian calibration studies for each model configuration. Input data induced uncertainty (DU) was addressed by Bayesian calibration of soil properties, climate drivers and agricultural management practices data. For the MU, DU and PU we performed several hundred simulations each to contribute to the individual uncertainty assessment. For the overall uncertainty quantification we assessed the model prediction probability, followed by sampled sets of input datasets and parameter distributions. Statistical analysis of the simulation results has been used to quantify the overall full uncertainty of the modelling approach. With this study we can contrast the variation in model results with the different sources of uncertainty for each ecosystem. Furthermore, we have been able to perform a full uncertainty analysis for modelling N2O and NO emissions from arable, grassland and forest soils, which is necessary for the comprehensibility of modelling results. We have applied the methodology to a regional inventory to assess the overall modelling uncertainty for a regional N2O and NO emissions inventory for the state of Saxony, Germany.

  11. Effects of RNA integrity on transcript quantification by total RNA sequencing of clinically collected human placental samples.

    PubMed

    Reiman, Mario; Laan, Maris; Rull, Kristiina; Sõber, Siim

    2017-08-01

    RNA degradation is a ubiquitous process that occurs in living and dead cells, as well as during handling and storage of extracted RNA. Reduced RNA quality caused by degradation is an established source of uncertainty for all RNA-based gene expression quantification techniques. RNA sequencing is an increasingly preferred method for transcriptome analyses, and dependence of its results on input RNA integrity is of significant practical importance. This study aimed to characterize the effects of varying input RNA integrity [estimated as RNA integrity number (RIN)] on transcript level estimates and delineate the characteristic differences between transcripts that differ in degradation rate. The study used ribodepleted total RNA sequencing data from a real-life clinically collected set (n = 32) of human solid tissue (placenta) samples. RIN-dependent alterations in gene expression profiles were quantified by using DESeq2 software. Our results indicate that small differences in RNA integrity affect gene expression quantification by introducing a moderate and pervasive bias in expression level estimates that significantly affected 8.1% of studied genes. The rapidly degrading transcript pool was enriched in pseudogenes, short noncoding RNAs, and transcripts with extended 3' untranslated regions. Typical slowly degrading transcripts (median length, 2389 nt) represented protein coding genes with 4-10 exons and high guanine-cytosine content. © FASEB.

  12. Development of Flight-Test Performance Estimation Techniques for Small Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    McCrink, Matthew Henry

    This dissertation provides a flight-testing framework for assessing the performance of fixed-wing, small-scale unmanned aerial systems (sUAS) by leveraging sub-system models of components unique to these vehicles. The development of the sub-system models, and their links to broader impacts on sUAS performance, is the key contribution of this work. The sub-system modeling and analysis focuses on the vehicle's propulsion, navigation and guidance, and airframe components. Quantification of the uncertainty in the vehicle's power available and control states is essential for assessing the validity of both the methods and results obtained from flight-tests. Therefore, detailed propulsion and navigation system analyses are presented to validate the flight testing methodology. Propulsion system analysis required the development of an analytic model of the propeller in order to predict the power available over a range of flight conditions. The model is based on the blade element momentum (BEM) method. Additional corrections are added to the basic model in order to capture the Reynolds-dependent scale effects unique to sUAS. The model was experimentally validated using a ground based testing apparatus. The BEM predictions and experimental analysis allow for a parameterized model relating the electrical power, measurable during flight, to the power available required for vehicle performance analysis. Navigation system details are presented with a specific focus on the sensors used for state estimation, and the resulting uncertainty in vehicle state. Uncertainty quantification is provided by detailed calibration techniques validated using quasi-static and hardware-in-the-loop (HIL) ground based testing. The HIL methods introduced use a soft real-time flight simulator to provide inertial quality data for assessing overall system performance. Using this tool, the uncertainty in vehicle state estimation based on a range of sensors and vehicle operational environments is presented. The propulsion and navigation system models are used to evaluate flight-testing methods for assessing fixed-wing sUAS performance. A brief airframe analysis is presented to provide a foundation for assessing the efficacy of the flight-test methods. The flight-testing presented in this work is focused on validating the aircraft drag polar, zero-lift drag coefficient, and span efficiency factor. Three methods are detailed and evaluated for estimating these design parameters. Specific focus is placed on the influence of propulsion and navigation system uncertainty on the resulting performance data. Performance estimates are used in conjunction with the propulsion model to estimate the impact of sensor and measurement uncertainty on the endurance and range of a fixed-wing sUAS. Endurance and range results for a simplistic power available model are compared to the Reynolds-dependent model presented in this work. Additional parameter sensitivity analyses related to state estimation uncertainties encountered in flight-testing are presented. Results from these analyses indicate that the sub-system models introduced in this work are of first-order importance, on the order of a 5-10% change in range and endurance, in assessing the performance of a fixed-wing sUAS.

  13. Nuclear Data Uncertainty Quantification: Past, Present and Future

    NASA Astrophysics Data System (ADS)

    Smith, D. L.

    2015-01-01

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  14. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
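
    As a rough illustration of two of the propagation approaches named above, the sketch below contrasts a first-order method-of-moments estimate (linearization via finite-difference gradients) with plain Monte Carlo sampling on a hypothetical aircraft-weight function of two configuration variables; the function and the input uncertainties are invented, not taken from the NASA tool.

    ```python
    import numpy as np

    def weight(x1, x2):                      # hypothetical smooth weight response
        return 50000.0 + 800.0 * x1 ** 1.2 + 300.0 * x1 * x2

    mu = np.array([8.0, 3.0])                # nominal configuration design variables
    sd = np.array([0.1, 0.05])               # small input standard deviations

    # method of moments: linearize with finite-difference gradients
    eps = 1e-4
    grad = np.array([
        (weight(mu[0] + eps, mu[1]) - weight(mu[0] - eps, mu[1])) / (2 * eps),
        (weight(mu[0], mu[1] + eps) - weight(mu[0], mu[1] - eps)) / (2 * eps),
    ])
    sd_mom = np.sqrt(np.sum((grad * sd) ** 2))

    # Monte Carlo simulation through the same function
    rng = np.random.default_rng(1)
    x = rng.normal(mu, sd, size=(100000, 2))
    w = weight(x[:, 0], x[:, 1])

    print(f"method of moments: sigma_W ~ {sd_mom:.1f}")
    print(f"Monte Carlo:       sigma_W ~ {w.std():.1f}")
    ```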

  15. Impact of Hydrogeological Uncertainty on Estimation of Environmental Risks Posed by Hydrocarbon Transportation Networks

    NASA Astrophysics Data System (ADS)

    Ciriello, V.; Lauriola, I.; Bonvicini, S.; Cozzani, V.; Di Federico, V.; Tartakovsky, Daniel M.

    2017-11-01

    Ubiquitous hydrogeological uncertainty undermines the veracity of quantitative predictions of soil and groundwater contamination due to accidental hydrocarbon spills from onshore pipelines. Such predictions, therefore, must be accompanied by quantification of predictive uncertainty, especially when they are used for environmental risk assessment. We quantify the impact of parametric uncertainty on quantitative forecasting of the temporal evolution of two key risk indices, the volumes of unsaturated and saturated soil contaminated by a surface spill of light nonaqueous-phase liquids. This is accomplished by treating the relevant uncertain parameters as random variables and deploying two alternative probabilistic models to estimate their effect on predictive uncertainty. A physics-based model is solved with a stochastic collocation method and is supplemented by a global sensitivity analysis. A second model represents the quantities of interest as polynomials of random inputs and has a virtually negligible computational cost, which enables one to explore any number of risk-related contamination scenarios. For a typical oil-spill scenario, our method can be used to identify key flow and transport parameters affecting the risk indices, to elucidate texture-dependent behavior of different soils, and to evaluate, with a degree of confidence specified by the decision-maker, the extent of contamination and the corresponding remediation costs.

  16. Quantifying the uncertainties of a bottom-up emission inventory of anthropogenic atmospheric pollutants in China

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Nielsen, C. P.; Lei, Y.; McElroy, M. B.; Hao, J.

    2010-11-01

    The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be -14%~12%, -10%~36%, -10%~36%, -12%~42%, -16%~52%, -23%~130%, and -37%~117%, respectively. Variations in activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken in interpreting these emission uncertainties. Although Monte Carlo simulation yields narrowed estimates of uncertainties compared to previous bottom-up emission studies, the results are not always consistent with those derived from satellite observations. The results thus represent an incremental research advance; while the analysis provides current estimates of uncertainty to researchers investigating Chinese and global atmospheric transport and chemistry, it also identifies specific needs in data collection and analysis to improve on them. Strengthened quantification of emissions of the included species and other, closely associated ones - notably CO2, generated largely by the same processes and thus subject to many of the same parameter uncertainties - is essential not only for science but for the design of policies to redress critical atmospheric environmental hazards at local, regional, and global scales.
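
    A hedged sketch of the kind of Monte Carlo propagation described above: total emissions are the sum over sectors of activity level times emission factor, each drawn from an assumed distribution, and the 2.5th/97.5th percentiles give asymmetric confidence bounds around the central estimate. The sector names, values, and distribution choices are purely illustrative, not those of the inventory.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50000

    sectors = {
        # name: (activity mean, activity CV, emission-factor mean, EF CV, lognormal EF?)
        "power":       (1.0e9, 0.05, 8.0e-3, 0.15, False),
        "cement":      (1.1e9, 0.10, 4.0e-3, 0.40, True),
        "residential": (3.0e8, 0.15, 6.0e-3, 0.60, True),
    }

    total = np.zeros(n)
    for act_mu, act_cv, ef_mu, ef_cv, ef_lognormal in sectors.values():
        act = rng.normal(act_mu, act_cv * act_mu, n)
        if ef_lognormal:
            sigma = np.sqrt(np.log(1.0 + ef_cv ** 2))
            ef = rng.lognormal(np.log(ef_mu) - 0.5 * sigma ** 2, sigma, n)
        else:
            ef = rng.normal(ef_mu, ef_cv * ef_mu, n)
        total += act * ef                    # sector emissions = activity x emission factor

    central = np.median(total)
    lo, hi = np.percentile(total, [2.5, 97.5])
    print(f"central {central:.3e}, 95% CI {100*(lo/central-1):+.0f}% ~ {100*(hi/central-1):+.0f}%")
    ```

    The asymmetric (skewed) intervals reported in the abstract arise naturally when emission factors are treated as lognormal, as in this toy example.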

  17. Quantifying the uncertainties of a bottom-up emission inventory of anthropogenic atmospheric pollutants in China

    NASA Astrophysics Data System (ADS)

    Zhao, Y.; Nielsen, C. P.; Lei, Y.; McElroy, M. B.; Hao, J.

    2011-03-01

    The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be -14%~13%, -13%~37%, -11%~38%, -14%~45%, -17%~54%, -25%~136%, and -40%~121%, respectively. Variations in activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken in interpreting these emission uncertainties. Although Monte Carlo simulation yields narrowed estimates of uncertainties compared to previous bottom-up emission studies, the results are not always consistent with those derived from satellite observations. The results thus represent an incremental research advance; while the analysis provides current estimates of uncertainty to researchers investigating Chinese and global atmospheric transport and chemistry, it also identifies specific needs in data collection and analysis to improve on them. Strengthened quantification of emissions of the included species and other, closely associated ones - notably CO2, generated largely by the same processes and thus subject to many of the same parameter uncertainties - is essential not only for science but for the design of policies to redress critical atmospheric environmental hazards at local, regional, and global scales.

  18. Assessing dry weather flow contribution in TSS and COD storm events loads in combined sewer systems.

    PubMed

    Métadier, M; Bertrand-Krajewski, J L

    2011-01-01

    Continuous high resolution long term turbidity measurements along with continuous discharge measurements are now recognised as an appropriate technique for the estimation of in-sewer total suspended solids (TSS) and chemical oxygen demand (COD) loads during storm events. In the combined system of the Ecully urban catchment (Lyon, France), this technique has been implemented since 2003, with more than 200 storm events monitored. This paper presents a method for the estimation of the dry weather (DW) contribution to measured total TSS and COD event loads, with special attention devoted to uncertainty assessment. The method accounts for the dynamics of both discharge and turbidity time series at a two-minute time step. The study is based on 180 DW days monitored in 2007-2008. Three distinct classes of DW days were identified. Variability analysis and quantification showed that no seasonal effect and no trend over the year were detectable. The law of propagation of uncertainties is applicable for uncertainty estimation. The method was then applied to all measured storm events. This study confirms the interest of long term continuous discharge and turbidity time series in sewer systems, especially in the perspective of wet weather quality modelling.
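
    The law of propagation of uncertainties mentioned above can be sketched for an event load computed as the sum of discharge times concentration over two-minute time steps; assuming uncorrelated standard uncertainties on discharge and on the turbidity-derived concentration, the first-order (GUM-type) combination looks roughly like this (all numbers are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt = 120.0                                   # s, 2-minute time step
    q = np.abs(rng.normal(0.4, 0.15, 90))        # m3/s discharge over a storm event
    c = np.abs(rng.normal(250.0, 80.0, 90))      # g/m3 TSS from a turbidity regression

    u_q = 0.05 * q                               # assumed standard uncertainties: 5% on discharge
    u_c = 0.20 * c                               # and 20% on concentration

    load = np.sum(q * c * dt) / 1e3              # event load in kg
    # first-order propagation for L = sum_i Q_i * C_i * dt, uncorrelated inputs:
    u_load = np.sqrt(np.sum((c * dt * u_q) ** 2 + (q * dt * u_c) ** 2)) / 1e3

    print(f"event TSS load = {load:.0f} kg +/- {1.96 * u_load:.0f} kg (95%)")
    ```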

  19. Uncertainty Quantification in High Throughput Screening: Applications to Models of Endocrine Disruption, Cytotoxicity, and Zebrafish Development (GRC Drug Safety)

    EPA Science Inventory

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of bioche...

  20. Uncertainty Quantification and Certification Prediction of Low-Boom Supersonic Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Reuter, Bryan W.; Walker, Eric L.; Kleb, Bil; Park, Michael A.

    2014-01-01

    The primary objective of this work was to develop and demonstrate a process for accurate and efficient uncertainty quantification and certification prediction of low-boom, supersonic, transport aircraft. High-fidelity computational fluid dynamics models of multiple low-boom configurations were investigated, including the Lockheed Martin SEEB-ALR body of revolution, the NASA 69 Delta Wing, and the Lockheed Martin 1021-01 configuration. A nonintrusive polynomial chaos surrogate modeling approach was used for reduced computational cost of propagating mixed, inherent (aleatory) and model-form (epistemic) uncertainty from both the computational fluid dynamics model and the near-field to ground level propagation model. A methodology has also been introduced to quantify the plausibility of a design to pass a certification under uncertainty. Results of this study include the analysis of each of the three configurations of interest under inviscid and fully turbulent flow assumptions. A comparison of the uncertainty outputs and sensitivity analyses between the configurations is also given. The results of this study illustrate the flexibility and robustness of the developed framework as a tool for uncertainty quantification and certification prediction of low-boom, supersonic aircraft.

  1. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
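
    As an illustration of the correlation-based sensitivity measures named above (Pearson, Spearman, and partial correlation coefficients), the sketch below computes all three for sampled inputs and a toy response; the response function is a stand-in, not VERA-CS.

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(7)
    n, d = 500, 4
    X = rng.normal(size=(n, d))                      # standardized uncertain inputs
    y = 2.0 * X[:, 0] - 0.5 * X[:, 1] ** 3 + 0.2 * rng.normal(size=n)   # toy output

    def partial_corr(X, y, j):
        """Correlation of X[:, j] and y after regressing out the other columns."""
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])
        rx = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
        ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        return pearsonr(rx, ry)[0]

    for j in range(d):
        print(f"x{j}: pearson={pearsonr(X[:, j], y)[0]:+.2f}  "
              f"spearman={spearmanr(X[:, j], y)[0]:+.2f}  "
              f"partial={partial_corr(X, y, j):+.2f}")
    ```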

  2. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
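
    A minimal sketch of the multi-fidelity idea described above, using a two-model control-variate estimator: a few high-fidelity samples are combined with many cheap low-fidelity samples through their estimated correlation. The analytic "models" are placeholders for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    def f_hf(x):                 # stand-in for the expensive high-fidelity model
        return np.sin(x) + 0.1 * x ** 2

    def f_lf(x):                 # cheap, correlated low-fidelity model
        return np.sin(x)

    n_hf, n_lf = 50, 5000
    x_hf = rng.normal(size=n_hf)           # HF samples (LF is also run on these)
    x_lf = rng.normal(size=n_lf)           # many extra LF-only samples

    y_hf = f_hf(x_hf)
    y_lf_shared = f_lf(x_hf)
    y_lf_extra = f_lf(x_lf)

    # control-variate weight from the shared samples
    alpha = np.cov(y_hf, y_lf_shared)[0, 1] / np.var(y_lf_shared, ddof=1)
    mf_estimate = y_hf.mean() + alpha * (y_lf_extra.mean() - y_lf_shared.mean())

    print(f"plain MC (HF only): {y_hf.mean():.4f}")
    print(f"multi-fidelity CV : {mf_estimate:.4f}")
    ```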

  3. Bayesian models for comparative analysis integrating phylogenetic uncertainty.

    PubMed

    de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P

    2012-06-28

    Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.

  4. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    PubMed Central

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language. PMID:22741602

  5. A stochastic approach to uncertainty quantification in residual moveout analysis

    NASA Astrophysics Data System (ADS)

    Johng-Ay, T.; Landa, E.; Dossou-Gbété, S.; Bordes, L.

    2015-06-01

    Oil and gas exploration and production usually rely on the interpretation of a single seismic image, which is obtained from observed data. However, the statistical nature of seismic data and the various approximations and assumptions are sources of uncertainties which may corrupt the evaluation of parameters. The quantification of these uncertainties is a major issue, since it is intended to support decisions that have important social and commercial implications. Residual moveout analysis, which is an important step in seismic data processing, is usually performed by a deterministic approach. In this paper we discuss a Bayesian approach to the uncertainty analysis.

  6. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.

  7. Quantifying uncertainty on sediment loads using bootstrap confidence intervals

    NASA Astrophysics Data System (ADS)

    Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg

    2017-01-01

    Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allowing temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95% confidence interval of (4331, 12 267) in 2010, while a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
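
    A much-simplified sketch of a bootstrap confidence interval on an annual load is given below; the paper itself uses a linear mixed model and a bootstrap that preserves temporal correlation, whereas this toy version merely resamples calibration pairs for a log-log rating curve, so it should be read as an illustration of the percentile-interval idea only.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # synthetic calibration data: paired discharge (m3/s) and TSS concentration (g/m3)
    q_cal = rng.lognormal(0.0, 0.6, 80)
    c_cal = 40.0 * q_cal ** 1.3 * rng.lognormal(0.0, 0.3, 80)

    # continuous discharge record to integrate over (hourly for one year)
    q_series = rng.lognormal(0.0, 0.6, 24 * 365)
    dt = 3600.0

    def annual_load(qc, cc):
        b, log_a = np.polyfit(np.log(qc), np.log(cc), 1)    # log C = log a + b log Q
        c_pred = np.exp(log_a) * q_series ** b
        return np.sum(c_pred * q_series * dt) / 1e6          # grams -> Mg

    loads = []
    for _ in range(2000):
        idx = rng.integers(0, len(q_cal), len(q_cal))        # resample calibration pairs
        loads.append(annual_load(q_cal[idx], c_cal[idx]))
    loads = np.array(loads)

    print(f"median load {np.median(loads):.0f} Mg/yr, "
          f"95% CI ({np.percentile(loads, 2.5):.0f}, {np.percentile(loads, 97.5):.0f})")
    ```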

  8. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.

  9. Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Rayner, P. J.

    2017-12-01

    The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and a priori information to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that have assumed homogeneity. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the 'forward' mode, but at the sacrifice of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis that perturbs our observations and a priori information within their probability distributions, to estimate the posterior uncertainty of thermo-chemical parameters in the crust.

  10. Model Uncertainty Quantification Methods In Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  11. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy's Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  12. Nuclear Data Uncertainty Quantification: Past, Present and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D. L.

    2015-01-01

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  13. Nuclear Data Uncertainty Quantification: Past, Present and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, D.L., E-mail: Donald.L.Smith@anl.gov

    2015-01-15

    An historical overview is provided of the mathematical foundations of uncertainty quantification and the roles played in the more recent past by nuclear data uncertainties in nuclear data evaluations and nuclear applications. Significant advances that have established the mathematical framework for contemporary nuclear data evaluation methods, as well as the use of uncertainty information in nuclear data evaluation and nuclear applications, are described. This is followed by a brief examination of the current status concerning nuclear data evaluation methodology, covariance data generation, and the application of evaluated nuclear data uncertainties in contemporary nuclear technology. A few possible areas for future investigation of this subject are also suggested.

  14. Development and Implementation of a Formal Framework for Bottom-up Uncertainty Analysis of Input Emissions: Case Study of Residential Wood Combustion

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Mashayekhi, R.; Saeednooran, S.; Hakami, A.; Ménard, R.; Moran, M. D.; Zhang, J.

    2016-12-01

    We have developed a formal framework for documentation, quantification, and propagation of uncertainties in upstream emissions inventory data at various stages leading to the generation of model-ready gridded emissions through emissions processing software such as the EPA's SMOKE (Sparse Matrix Operator Kernel Emissions) system. To illustrate this framework we present a proof-of-concept case study of a bottom-up quantitative assessment of uncertainties in emissions from residential wood combustion (RWC) in the U.S. and Canada. Uncertainties associated with key inventory parameters are characterized based on existing information sources, including the American Housing Survey (AHS) from the U.S. Census Bureau, Timber Products Output (TPO) surveys from the U.S. Forest Service, TNS Canadian Facts surveys, and the AP-42 emission factor document from the U.S. EPA. The propagation of uncertainties is based on Monte Carlo simulation code external to SMOKE. Latin Hypercube Sampling (LHS) is implemented to generate a set of random realizations of each RWC inventory parameter, for which the uncertainties are assumed to be normally distributed. Random realizations are also obtained for each RWC temporal and chemical speciation profile and spatial surrogate field external to SMOKE using the LHS approach. SMOKE outputs for primary emissions (e.g., CO, VOC) using both RWC emission inventory realizations and perturbed temporal and chemical profiles and spatial surrogates show relative uncertainties of about 30-50% across the U.S. and about 70-100% across Canada. Positive skewness values (up to 2.7) and variable kurtosis values (up to 4.8) were also found. Spatial allocation contributes significantly to the overall uncertainty, particularly in Canada. By applying this framework we are able to produce random realizations of model-ready gridded emissions that along with available meteorological ensembles can be used to propagate uncertainties through chemical transport models. The approach described here provides an effective means for formal quantification of uncertainties in estimated emissions from various source sectors and for continuous documentation, assessment, and reduction of emission uncertainties.
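
    A hedged sketch of the Latin Hypercube Sampling step described above, using scipy's quasi-Monte Carlo module to draw stratified uniform samples and transform them to assumed normal distributions; the parameter names, means, and standard deviations are illustrative, not values from the RWC inventory.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    params = {                         # mean, standard deviation (assumed normal)
        "wood_burned_per_household": (2.1, 0.4),     # cords/yr (hypothetical)
        "ef_co_g_per_kg":            (80.0, 20.0),
        "ef_voc_g_per_kg":           (10.0, 3.0),
    }

    sampler = qmc.LatinHypercube(d=len(params), seed=0)
    u = sampler.random(n=1000)                       # stratified uniform [0, 1) design

    # map each uniform column through the inverse CDF of its assumed distribution
    realizations = np.column_stack([
        norm.ppf(u[:, j], loc=mu, scale=sd)
        for j, (mu, sd) in enumerate(params.values())
    ])
    print(realizations.shape, realizations.mean(axis=0))
    ```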

  15. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America.

    PubMed

    Shutt, Deborah P; Manore, Carrie A; Pankavich, Stephen; Porter, Aaron T; Del Valle, Sara Y

    2017-12-01

    As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
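
    A toy sketch of Approximate Bayesian Computation by rejection, in the spirit of the inference described above: parameters are drawn from flat priors, a simulated reported case count is compared with a hypothetical observation, and draws within a tolerance are kept. The final-size model, population, observed count, and tolerance are all invented; the real analysis fit full outbreak dynamics.

    ```python
    import numpy as np

    rng = np.random.default_rng(2017)
    population = 500_000                        # hypothetical population at risk
    reported_obs = 60_000                       # hypothetical reported case count
    n_draws = 100_000

    r0 = rng.uniform(1.0, 8.0, n_draws)         # flat prior on the reproduction number
    rho = rng.uniform(0.05, 0.5, n_draws)       # flat prior on the reporting rate

    # SIR final-size relation z = 1 - exp(-R0 * z), solved by fixed-point iteration
    z = np.full(n_draws, 0.5)
    for _ in range(100):
        z = 1.0 - np.exp(-r0 * z)

    reported_sim = rng.binomial((z * population).astype(int), rho)
    keep = np.abs(reported_sim - reported_obs) / reported_obs < 0.05   # ABC tolerance

    print(f"{keep.sum()} accepted draws")
    print(f"R0 median {np.median(r0[keep]):.2f}, "
          f"reporting rate median {100 * np.median(rho[keep]):.0f}%")
    # note: R0 and the reporting rate are strongly confounded by a single summary
    # statistic, which is why richer outbreak data are needed to separate them
    ```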

  16. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America

    DOE PAGES

    Shutt, Deborah P.; Manore, Carrie A.; Pankavich, Stephen; ...

    2017-07-13

    As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges.

  17. Estimating the reproductive number, total outbreak size, and reporting rates for Zika epidemics in South and Central America

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shutt, Deborah P.; Manore, Carrie A.; Pankavich, Stephen

    As South and Central American countries prepare for increased birth defects from Zika virus outbreaks and plan for mitigation strategies to minimize ongoing and future outbreaks, understanding important characteristics of Zika outbreaks and how they vary across regions is a challenging and important problem. We developed a mathematical model for the 2015/2016 Zika virus outbreak dynamics in Colombia, El Salvador, and Suriname. We fit the model to publicly available data provided by the Pan American Health Organization, using Approximate Bayesian Computation to estimate parameter distributions and provide uncertainty quantification. The model indicated that a country-level analysis was not appropriate for Colombia. We then estimated the basic reproduction number to range between 4 and 6 for El Salvador and Suriname with a median of 4.3 and 5.3, respectively. We estimated the reporting rate to be around 16% in El Salvador and 18% in Suriname with estimated total outbreak sizes of 73,395 and 21,647 people, respectively. The uncertainty in parameter estimates highlights a need for research and data collection that will better constrain parameter ranges.

  18. Non-parametric correlative uncertainty quantification and sensitivity analysis: Application to a Langmuir bimolecular adsorption model

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.

    2018-03-01

    We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.

  19. The two-dimensional Monte Carlo: a new methodologic paradigm for dose reconstruction for epidemiological studies.

    PubMed

    Simon, Steven L; Hoffman, F Owen; Hofer, Eduard

    2015-01-01

    Retrospective dose estimation, particularly dose reconstruction that supports epidemiological investigations of health risk, relies on various strategies that include models of physical processes and exposure conditions with detail ranging from simple to complex. Quantification of dose uncertainty is an essential component of assessments for health risk studies since, as is well understood, it is impossible to retrospectively determine the true dose for each person. To address uncertainty in dose estimation, numerical simulation tools have become commonplace and there is now an increased understanding about the needs and what is required for models used to estimate cohort doses (in the absence of direct measurement) to evaluate dose response. It now appears that for dose-response algorithms to derive the best, unbiased estimate of health risk, we need to understand the type, magnitude and interrelationships of the uncertainties of model assumptions, parameters and input data used in the associated dose estimation models. Heretofore, uncertainty analysis of dose estimates did not always properly distinguish between categories of errors, e.g., uncertainty that is specific to each subject (i.e., unshared error), and uncertainty of doses from a lack of understanding and knowledge about parameter values that are shared to varying degrees by members of subsets of the cohort. While mathematical propagation of errors by Monte Carlo simulation methods has been used for years to estimate the uncertainty of an individual subject's dose, it was almost always conducted without consideration of dependencies between subjects. In retrospect, these types of simple analyses are not suitable for studies with complex dose models, particularly when important input data are missing or otherwise not available. The dose estimation strategy presented here is a simulation method that corrects the previous deficiencies of analytical or simple Monte Carlo error propagation methods and is termed, due to its capability to maintain separation between shared and unshared errors, the two-dimensional Monte Carlo (2DMC) procedure. Simply put, the 2DMC method simulates alternative, possibly true, sets (or vectors) of doses for an entire cohort rather than a single set that emerges when each individual's dose is estimated independently from other subjects. Moreover, estimated doses within each simulated vector maintain proper inter-relationships such that the estimated doses for members of a cohort subgroup that share common lifestyle attributes and sources of uncertainty are properly correlated. The 2DMC procedure simulates inter-individual variability of possibly true doses within each dose vector and captures the influence of uncertainty in the values of dosimetric parameters across multiple realizations of possibly true vectors of cohort doses. The primary characteristic of the 2DMC approach, as well as its strength, is defined by the proper separation between uncertainties shared by members of the entire cohort or members of defined cohort subsets, and uncertainties that are individual-specific and therefore unshared.
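
    The shared/unshared separation can be illustrated with a nested sampling sketch: the outer loop draws parameters whose uncertainty is shared across the cohort, the inner draw adds independent subject-specific errors, and each outer iteration yields one alternative, possibly true vector of cohort doses. The dose model and numbers below are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_vectors = 1000, 500

    intake = rng.lognormal(np.log(50.0), 0.5, n_subjects)   # nominal subject intakes (toy)

    dose_vectors = np.empty((n_vectors, n_subjects))
    for k in range(n_vectors):
        # shared uncertainty: one draw per realization, applied to every subject
        dose_coeff = rng.lognormal(np.log(2.0e-3), 0.3)
        # unshared uncertainty: an independent draw for each subject
        subject_error = rng.lognormal(0.0, 0.4, n_subjects)
        dose_vectors[k] = intake * dose_coeff * subject_error

    # spread of the cohort-mean dose across realizations reflects shared uncertainty;
    # spread within a single realization reflects inter-individual variability
    cohort_means = dose_vectors.mean(axis=1)
    print(f"cohort mean dose: median {np.median(cohort_means):.3f}, "
          f"95% interval [{np.percentile(cohort_means, 2.5):.3f}, "
          f"{np.percentile(cohort_means, 97.5):.3f}] (arbitrary units)")
    ```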

  20. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE PAGES

    McDonnell, J. D.; Schunck, N.; Higdon, D.; ...

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. In addition, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  1. Uncertainty quantification for nuclear density functional theory and information content of new measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonnell, J. D.; Schunck, N.; Higdon, D.

    2015-03-24

    Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. As a result, the example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.

  2. Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows

    NASA Technical Reports Server (NTRS)

    Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William

    2015-01-01

    The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, the Wilcox (2006) k-ω Model, and the Menter Shear-Stress Transport Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point quantities are considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contributions of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will reduce the amount of uncertainty in the output significantly for transonic, wall-bounded flows.
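
    For readers unfamiliar with Sobol indices, the sketch below estimates first-order indices with a standard pick-freeze (Saltelli-type) Monte Carlo design on a toy response whose "closure coefficients" are sampled from intervals; it illustrates the ranking idea only and is unrelated to the actual turbulence models or FUN3D.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    d, n = 3, 20000
    lower = np.array([0.3, 0.06, 0.5])           # interval bounds on toy "coefficients"
    upper = np.array([0.4, 0.10, 0.7])

    def model(x):                                # toy quantity of interest
        return 1.5 * x[:, 0] + 10.0 * x[:, 1] ** 2 + 0.8 * x[:, 0] * x[:, 2]

    A = rng.uniform(lower, upper, (n, d))
    B = rng.uniform(lower, upper, (n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]), ddof=1)

    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                       # "pick" column i from B, "freeze" the rest
        s1 = np.mean(yB * (model(AB) - yA)) / var_y
        print(f"coefficient {i}: first-order Sobol index ~ {s1:.2f}")
    ```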

  3. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
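
    The weighted nonlinear least squares with a prior-information penalty mentioned above can be sketched as follows; the model fitted here is only the leading term of the Ogata-Banks solution for a continuous tracer injection (a simplification of the full CDE solutions coded in CXTFIT), and the data, priors, and weights are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.special import erfc

    x_obs = 30.0                                   # cm, observation depth
    t = np.linspace(1.0, 60.0, 40)                 # h
    v_true, d_true = 1.0, 2.0                      # cm/h, cm2/h

    def bt_curve(theta, t):
        """Leading term of the Ogata-Banks breakthrough curve for a step input."""
        v, d = theta
        return 0.5 * erfc((x_obs - v * t) / (2.0 * np.sqrt(d * t)))

    rng = np.random.default_rng(8)
    c_obs = bt_curve([v_true, d_true], t) + rng.normal(0.0, 0.02, t.size)
    sigma_obs = 0.02                               # observation standard error (weight)
    theta_prior = np.array([1.2, 1.5])             # prior estimates of v, D
    sigma_prior = np.array([0.5, 1.0])             # prior uncertainties (penalty weights)

    def residuals(theta):
        data_res = (bt_curve(theta, t) - c_obs) / sigma_obs
        prior_res = (theta - theta_prior) / sigma_prior       # penalty term
        return np.concatenate([data_res, prior_res])

    fit = least_squares(residuals, x0=theta_prior, bounds=([0.01, 0.01], [10.0, 50.0]))
    # approximate parameter covariance from the Jacobian for an uncertainty estimate
    cov = np.linalg.inv(fit.jac.T @ fit.jac)
    print("estimated v, D:", fit.x, "+/-", np.sqrt(np.diag(cov)))
    ```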

  4. Quantification of uncertainty in first-principles predicted mechanical properties of solids: Application to solid ion conductors

    NASA Astrophysics Data System (ADS)

    Ahmad, Zeeshan; Viswanathan, Venkatasubramanian

    2016-08-01

    Computationally-guided material discovery is being increasingly employed using a descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built into several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use an ensemble of energies instead of a single energy, giving us an ensemble of fits and thereby an ensemble of mechanical properties, whose spread can be used to quantify the uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid state Li-ion and Na-ion conductors.
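    The sketch below mimics the ensemble idea with synthetic numbers: each member of an ensemble of energy-volume curves (standing in for non-self-consistent, BEEF-type ensemble energies) is fitted separately, and the spread of the fitted bulk moduli serves as the uncertainty estimate. The quadratic fit, volumes, and noise level are illustrative assumptions, not the functionals or materials of the paper.

```python
# Minimal sketch of the ensemble idea: fit each member of an ensemble of
# energy-volume curves separately and use the spread of the resulting bulk
# moduli as the uncertainty. Energies here are synthetic stand-ins.
import numpy as np

volumes = np.linspace(0.94, 1.06, 7) * 40.0        # cell volumes [A^3], illustrative
n_ensemble = 2000
rng = np.random.default_rng(1)

# Synthetic ensemble: a "true" quadratic E(V) plus small random perturbations.
V0_true, B_true = 40.0, 0.5                        # B in eV/A^3 (~80 GPa)
E_base = 0.5 * (B_true / V0_true) * (volumes - V0_true) ** 2
ensemble = E_base + rng.normal(0.0, 0.002, (n_ensemble, volumes.size))

bulk_moduli = []
for energies in ensemble:
    c2, c1, c0 = np.polyfit(volumes, energies, 2)  # E ~ c2 V^2 + c1 V + c0
    V0 = -c1 / (2.0 * c2)                          # equilibrium volume of this fit
    B = 2.0 * c2 * V0 * 160.2176634                # B = V0 d2E/dV2, eV/A^3 -> GPa
    bulk_moduli.append(B)

bulk_moduli = np.array(bulk_moduli)
print(f"B = {bulk_moduli.mean():.1f} +/- {bulk_moduli.std():.1f} GPa")
```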

  5. SU-D-303-03: Impact of Uncertainty in T1 Measurements On Quantification of Dynamic Contrast Enhanced MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aryal, M; Cao, Y

    2015-06-15

    Purpose: Quantification of dynamic contrast enhanced (DCE) MRI requires native longitudinal relaxation time (T1) measurement. This study aimed to assess uncertainty in T1 measurements using two different methods. Methods and Materials: Brain MRI scans were performed on a 3T scanner in 9 patients who had low grade/benign tumors and partial brain radiotherapy without chemotherapy at pre-RT, week-3 during RT (wk-3), end-RT, and 1, 6 and 18 months after RT. T1-weighted images were acquired using gradient echo sequences with 1) 2 different flip angles (5° and 15°), and 2) 5 variable TRs (100–2000 ms). After creating quantitative T1 maps, average T1 was calculated in regions of interest (ROI), which were distant from tumors and received accumulated radiation doses < 5 Gy at wk-3. ROIs included left and right normal Putamen and Thalamus (gray matter: GM), and frontal and parietal white matter (WM). Since there were no significant T1 changes, or even a trend of changes, from pre-RT to wk-3 in these ROIs, a relative repeatability coefficient (RC) of T1 as a measure of uncertainty was estimated in each ROI using the data pre-RT and at wk-3. The individual T1 changes at later time points were evaluated against the estimated RCs. Results: The 2-flip angle method produced small RCs in GM (9.7–11.7%) but large RCs in WM (12.2–13.6%) compared to the saturation-recovery (SR) method (11.0–17.7% for GM and 7.5–11.2% for WM). More than 81% of individual T1 changes were within T1 uncertainty ranges defined by RCs. Conclusion: Our study suggests that the impact of T1 uncertainty on physiological parameters derived from DCE MRI is not negligible. A short scan with 2 flip angles is able to achieve repeatability of T1 estimates similar to a long scan with 5 different TRs, and is therefore desirable to integrate into the DCE protocol. The present study was supported by the National Institutes of Health (NIH) under grant numbers U01 CA183848 and R01 NS064973.

  6. UQTools: The Uncertainty Quantification Toolbox - Introduction and Tutorial

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Crespo, Luis G.; Giesy, Daniel P.

    2012-01-01

    UQTools is the short name for the Uncertainty Quantification Toolbox, a software package designed to efficiently quantify the impact of parametric uncertainty on engineering systems. UQTools is a MATLAB-based software package and was designed to be discipline independent, employing very generic representations of the system models and uncertainty. Specifically, UQTools accepts linear and nonlinear system models and permits arbitrary functional dependencies between the system's measures of interest and the probabilistic or non-probabilistic parametric uncertainty. One of the most significant features incorporated into UQTools is the theoretical development centered on homothetic deformations and their application to set bounding and approximating failure probabilities. Beyond the set bounding technique, UQTools provides a wide range of probabilistic and uncertainty-based tools to solve key problems in science and engineering.

  7. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

  8. Systematic influences of gamma-ray spectrometry data near the decision threshold for radioactivity measurements in the environment.

    PubMed

    Zorko, Benjamin; Korun, Matjaž; Mora Canadas, Juan Carlos; Nicoulaud-Gouin, Valerie; Chyly, Pavol; Blixt Buhr, Anna Maria; Lager, Charlotte; Aquilonius, Karin; Krajewski, Pawel

    2016-07-01

    Several methods for reporting outcomes of gamma-ray spectrometric measurements of environmental samples for dose calculations are presented and discussed. The measurement outcomes can be reported as primary measurement results, primary measurement results modified according to the quantification limit, best estimates obtained by the Bayesian posterior (ISO 11929), best estimates obtained by the probability density distribution resembling shifting, and the procedure recommended by the European Commission (EC). The annual dose is calculated from the arithmetic average using any of these five procedures. It was shown that the primary measurement results modified according to the quantification limit could lead to an underestimation of the annual dose. On the other hand, the best estimates lead to an overestimation of the annual dose. The annual doses calculated from the measurement outcomes obtained according to the EC's recommended procedure, which does not cope with the uncertainties, fluctuate between an under- and overestimation, depending on the frequency of the measurement results that are larger than the limit of detection. In the extreme case, when no measurement results above the detection limit occur, the average over primary measurement results modified according to the quantification limit underestimates the average over primary measurement results by about 80%. The average over best estimates calculated according to the procedure resembling shifting overestimates the average over primary measurement results by 35%, the average obtained by the Bayesian posterior by 85%, and the treatment according to the EC recommendation by 89%. Copyright © 2016 Elsevier Ltd. All rights reserved.
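    For orientation, the sketch below implements the kind of Bayesian best estimate referred to above: the posterior for the true activity is taken as a normal distribution truncated at zero, whose mean exceeds the primary measurement result most strongly near the decision threshold. This is a simplified reading of the ISO 11929 construction, not the exact procedures compared in the paper.

```python
# Sketch of an ISO 11929-type "best estimate": the posterior for the true
# activity is modeled as N(y, u^2) truncated at zero, and its mean
# y + u*phi(y/u)/Phi(y/u) is reported instead of the primary result y.
from math import erf, exp, pi, sqrt

def best_estimate(y, u):
    """Mean of a normal distribution N(y, u^2) truncated to non-negative values."""
    z = y / u
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)       # standard normal pdf
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))         # standard normal cdf
    return y + u * phi / Phi

# Near the decision threshold the best estimate is noticeably larger than the
# primary result, which is why averages of best estimates tend to overestimate.
for y in (-0.5, 0.0, 0.5, 2.0, 5.0):
    print(f"primary result {y:5.2f}  ->  best estimate {best_estimate(y, 1.0):.2f}")
```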

  9. Uncertainty Quantification in Climate Modeling and Projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Yun; Jackson, Charles; Giorgi, Filippo

    The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for assessing reliability and uncertainties of climate change information. An alternative approach is to generate similar ensembles by perturbing parameters within a single-model framework. One of the workshop's objectives was to give participants a deeper understanding of these approaches within a Bayesian statistical framework. However, there remain significant challenges still to be resolved before UQ can be applied in a convincing way to climate models and their projections.

  10. Determination of lipophilic marine toxins in mussels. Quantification and confirmation criteria using high resolution mass spectrometry.

    PubMed

    Domènech, Albert; Cortés-Francisco, Nuria; Palacios, Oscar; Franco, José M; Riobó, Pilar; Llerena, José J; Vichi, Stefania; Caixach, Josep

    2014-02-07

    A multitoxin method has been developed for quantification and confirmation of lipophilic marine biotoxins in mussels by liquid chromatography coupled to high resolution mass spectrometry (HRMS), using an Orbitrap-Exactive HCD mass spectrometer. Okadaic acid (OA), yessotoxin, azaspiracid-1, gymnodimine, 13-desmethyl spirolide C, pectenotoxin-2 and brevetoxin B were analyzed as representative compounds of each lipophilic toxin group. HRMS identification and confirmation criteria were established. Fragment and isotope ions and ion ratios were studied and evaluated for confirmation purposes. In-depth characterization of the full-scan and fragmentation spectra of the main toxins was carried out. Accuracy (trueness and precision), linearity, calibration curve check, limit of quantification (LOQ) and specificity were the parameters established for the method validation. The validation was performed at 0.5 times the current European Union permitted levels. The method performed very well for the parameters investigated. The trueness, expressed as recovery, ranged from 80% to 94%, the precision, expressed as intralaboratory reproducibility, ranged from 5% to 22%, and the LOQs ranged from 0.9 to 4.8 pg on column. Uncertainty of the method was also estimated for OA, using a certified reference material. A top-down approach was used, considering two main contributions: those arising from the trueness studies and those coming from the determination of precision. An overall expanded uncertainty of 38% was obtained. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. The Prediction-Focused Approach: An opportunity for hydrogeophysical data integration and interpretation

    NASA Astrophysics Data System (ADS)

    Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef

    2017-04-01

    Hydrogeophysics is an interdisciplinary field of sciences aiming at a better understanding of subsurface hydrological processes. Although geophysical surveys have been successfully used to qualitatively characterize the subsurface, two important challenges remain for a better quantification of hydrological processes: (1) the inversion of geophysical data and (2) their integration in hydrological subsurface models. The classical inversion approach using regularization suffers from spatially and temporally varying resolution and yields geologically unrealistic solutions without uncertainty quantification, making their utilization for hydrogeological calibration less consistent. More advanced techniques such as coupled inversion allow for a direct use of geophysical data for conditioning groundwater and solute transport model calibration. However, the technique is difficult to apply in complex cases and remains computationally demanding to estimate uncertainty. In a recent study, we investigate a prediction-focused approach (PFA) to directly estimate subsurface physical properties from geophysical data, circumventing the need for classic inversions. In PFA, we seek a direct relationship between the data and the subsurface variables we want to predict (the forecast). This relationship is obtained through a prior set of subsurface models for which both data and forecast are computed. A direct relationship can often be derived through dimension reduction techniques. PFA offers a framework for both hydrogeophysical "inversion" and hydrogeophysical data integration. For hydrogeophysical "inversion", the considered forecast variable is the subsurface variable, such as the salinity. An ensemble of possible solutions is generated, allowing uncertainty quantification. For hydrogeophysical data integration, the forecast variable becomes the prediction we want to make with our subsurface models, such as the concentration of contaminant in a drinking water production well. Geophysical and hydrological data are combined to derive a direct relationship between data and forecast. We illustrate the process for the design of an aquifer thermal energy storage (ATES) system. An ATES system can theoretically recover in winter the heat stored in the aquifer during summer. In practice, the energy efficiency is often lower than expected due to spatial heterogeneity of hydraulic properties combined with an unfavorable hydrogeological gradient. A proper design of ATES systems should consider the uncertainty of the prediction related to those parameters. With a global sensitivity analysis, we identify sensitive parameters for heat storage prediction and validate the use of a short term heat tracing experiment monitored with geophysics to generate informative data. First, we illustrate how PFA can be used to successfully derive the distribution of temperature in the aquifer from ERT during the heat tracing experiment. Then, we successfully integrate the geophysical data to predict medium-term heat storage in the aquifer using PFA. The result is a full quantification of the posterior distribution of the prediction conditioned on observed data in a relatively limited time budget.
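    A minimal sketch of the prediction-focused workflow on synthetic ensembles is given below: simulated data and forecasts from a prior ensemble are reduced to a few principal components, and a direct statistical relationship (here an ordinary least-squares regression, whereas the cited work uses more elaborate dimension-reduction and regression techniques) maps observed data straight to the forecast without a classical inversion. All array sizes and the toy model are assumptions.

```python
# Minimal sketch of a prediction-focused workflow on synthetic ensembles:
# reduce data d and forecast h to principal components and regress h-scores on
# d-scores, so observed data can be mapped directly to a forecast.
import numpy as np

rng = np.random.default_rng(2)
n_prior, n_data, n_forecast = 500, 200, 50

# Prior ensemble: each model realization yields simulated data and a forecast.
latent = rng.normal(size=(n_prior, 5))                     # hidden model factors
D = latent @ rng.normal(size=(5, n_data)) + 0.1 * rng.normal(size=(n_prior, n_data))
H = latent @ rng.normal(size=(5, n_forecast)) + 0.1 * rng.normal(size=(n_prior, n_forecast))

def pca_scores(X, k):
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], X.mean(axis=0)

d_scores, d_basis, d_mean = pca_scores(D, k=4)
h_scores, h_basis, h_mean = pca_scores(H, k=3)

# Direct data -> forecast relationship in reduced space (least squares).
W, *_ = np.linalg.lstsq(d_scores, h_scores, rcond=None)

d_obs = D[0] + 0.05 * rng.normal(size=n_data)              # pretend observation
h_pred = ((d_obs - d_mean) @ d_basis.T) @ W @ h_basis + h_mean
print("predicted forecast (first 5 values):", np.round(h_pred[:5], 2))
```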

  12. A python framework for environmental model uncertainty analysis

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Doherty, John E.

    2016-01-01

    We have developed pyEMU, a python framework for Environmental Modeling Uncertainty analyses, an open-source tool that is non-intrusive, easy to use, computationally efficient, and scalable to highly-parameterized inverse problems. The framework implements several types of linear (first-order, second-moment (FOSM)) and non-linear uncertainty analyses. The FOSM-based analyses can also be completed prior to parameter estimation to help inform important modeling decisions, such as parameterization and objective function formulation. Complete workflows for several types of FOSM-based and non-linear analyses are documented in example notebooks implemented using Jupyter that are available in the online pyEMU repository. Example workflows include basic parameter and forecast analyses, data worth analyses, and error-variance analyses, as well as usage of parameter ensemble generation and management capabilities. These workflows document the necessary steps and provide insights into the results, with the goal of educating users not only in how to apply pyEMU, but also in the underlying theory of applied uncertainty quantification.
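    The linear analysis mentioned above can be summarized, independently of pyEMU's own API, by the standard FOSM algebra sketched below: a Jacobian, a prior parameter covariance, and an observation noise covariance yield a posterior parameter covariance and hence prior and posterior forecast variances. The matrices are random stand-ins.

```python
# Generic FOSM (first-order, second-moment) sketch in plain numpy -- not the
# pyEMU API. All matrices here are small random stand-ins.
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_par = 30, 10

J = rng.normal(size=(n_obs, n_par))          # sensitivities of observations to parameters
C_p = np.diag(np.full(n_par, 1.0))           # prior parameter covariance
C_o = np.diag(np.full(n_obs, 0.25))          # observation noise covariance
y = rng.normal(size=n_par)                   # sensitivity of a forecast to parameters

post_prec = J.T @ np.linalg.inv(C_o) @ J + np.linalg.inv(C_p)
C_post = np.linalg.inv(post_prec)            # posterior (conditioned) parameter covariance

prior_var = y @ C_p @ y                      # forecast variance before calibration
post_var = y @ C_post @ y                    # forecast variance after calibration
print(f"forecast std prior {np.sqrt(prior_var):.2f} -> posterior {np.sqrt(post_var):.2f}")
```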

  13. From cutting-edge pointwise cross-section to groupwise reaction rate: A primer

    NASA Astrophysics Data System (ADS)

    Sublet, Jean-Christophe; Fleming, Michael; Gilbert, Mark R.

    2017-09-01

    The nuclear research and development community has a history of using both integral and differential experiments to support accurate lattice-reactor, nuclear reactor criticality and shielding simulations, as well as verification and validation efforts of cross sections and emitted particle spectra. An important aspect to this type of analysis is the proper consideration of the contribution of the neutron spectrum in its entirety, with correct propagation of uncertainties and standard deviations derived from Monte Carlo simulations, to the local and total uncertainty in the simulated reaction rates (RRs), which usually only apply to one application at a time. This paper identifies deficiencies in the traditional treatment, and discusses correct handling of the RR uncertainty quantification and propagation, including details of the cross section components in the RR uncertainty estimates, which are verified for relevant applications. The methodology that rigorously captures the spectral shift and cross section contributions to the uncertainty in the RR are discussed with quantified examples that demonstrate the importance of the proper treatment of the spectrum profile and cross section contributions to the uncertainty in the RR and subsequent response functions. The recently developed inventory code FISPACT-II, when connected to the processed nuclear data libraries TENDL-2015, ENDF/B-VII.1, JENDL-4.0u or JEFF-3.2, forms an enhanced multi-physics platform providing a wide variety of advanced simulation methods for modelling activation, transmutation, burnup protocols and simulating radiation damage source terms. The system has extended cutting-edge nuclear data forms, uncertainty quantification and propagation methods, which have been the subject of recent integral and differential, fission, fusion and accelerators validation efforts. The simulation system is used to accurately and predictively probe, understand and underpin a modern and sustainable understanding of the nuclear physics that is so important for many areas of science and technology; advanced fission and fuel systems, magnetic and inertial confinement fusion, high-energy and accelerator physics, medical applications, isotope production, earth exploration, astrophysics and homeland security.
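    The first-order ("sandwich") propagation the paper builds on can be illustrated with a toy group-wise example: for a collapsed reaction rate RR = sum_g phi_g * sigma_g, independent covariances on the group cross sections and on the spectrum each contribute a quadratic form to the RR variance. The group structure, values, and correlation model below are illustrative only.

```python
# Sketch of group-wise reaction-rate uncertainty propagation (sandwich rule):
# RR = sum_g phi_g * sigma_g, with independent covariances for the group
# cross sections and for the group fluxes (spectrum). Data are illustrative.
import numpy as np

G = 5                                         # number of energy groups
phi = np.array([0.1, 0.3, 0.4, 0.15, 0.05])   # normalized group fluxes
sigma = np.array([2.0, 1.2, 0.8, 0.5, 0.3])   # group cross sections [barn]

# Relative covariances (5% on cross sections, 3% on spectrum, weakly correlated).
corr = 0.3 * np.ones((G, G)) + 0.7 * np.eye(G)
C_sigma = corr * np.outer(0.05 * sigma, 0.05 * sigma)
C_phi = corr * np.outer(0.03 * phi, 0.03 * phi)

rr = phi @ sigma
var_rr = phi @ C_sigma @ phi + sigma @ C_phi @ sigma   # first-order propagation
print(f"RR = {rr:.3f} +/- {np.sqrt(var_rr):.3f} (relative {100*np.sqrt(var_rr)/rr:.1f}%)")
```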

  14. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  15. Quantification of CO2 generation in sedimentary basins through carbonate/clays reactions with uncertain thermodynamic parameters

    NASA Astrophysics Data System (ADS)

    Ceriotti, G.; Porta, G. M.; Geloni, C.; Dalla Rosa, M.; Guadagnini, A.

    2017-09-01

    We develop a methodological framework and mathematical formulation which yields estimates of the uncertainty associated with the amounts of CO2 generated by Carbonate-Clays Reactions (CCR) in large-scale subsurface systems to assist characterization of the main features of this geochemical process. Our approach couples a one-dimensional compaction model, providing the dynamics of the evolution of porosity, temperature and pressure along the vertical direction, with a chemical model able to quantify the partial pressure of CO2 resulting from minerals and pore water interaction. The modeling framework we propose allows (i) estimating the depth at which the source of gases is located and (ii) quantifying the amount of CO2 generated, based on the mineralogy of the sediments involved in the basin formation process. A distinctive objective of the study is the quantification of the way the uncertainty affecting chemical equilibrium constants propagates to model outputs, i.e., the flux of CO2. These parameters are considered as key sources of uncertainty in our modeling approach because temperature and pressure distributions associated with deep burial depths typically fall outside the range of validity of commonly employed geochemical databases and typically used geochemical software. We also analyze the impact of the relative abundance of primary phases in the sediments on the activation of CCR processes. As a test bed, we consider a computational study where pressure and temperature conditions are representative of those observed in real sedimentary formations. Our results are conducive to the probabilistic assessment of (i) the characteristic pressure and temperature at which CCR leads to generation of CO2 in sedimentary systems, (ii) the order of magnitude of the CO2 generation rate that can be associated with CCR processes.

  16. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  17. Reducing uncertainty in estimating virus reduction by advanced water treatment processes.

    PubMed

    Gerba, Charles P; Betancourt, Walter Q; Kitajima, Masaaki; Rock, Channah M

    2018-04-15

    Treatment of wastewater for potable reuse requires the reduction of enteric viruses to levels that pose no significant risk to human health. Advanced water treatment trains (e.g., chemical clarification, reverse osmosis, ultrafiltration, advanced oxidation) have been developed to provide reductions of viruses to differing levels of regulatory control depending upon the levels of human exposure and associated health risks. Important in any assessment is information on the concentration and types of viruses in the untreated wastewater, as well as the degree of removal by each treatment process. However, it is critical that the uncertainty associated with virus concentration and removal or inactivation by wastewater treatment be understood to improve these estimates and to identify research needs. We critically reviewed the literature to identify uncertainty in these estimates. Biological diversity within families and genera of viruses (e.g. enteroviruses, rotaviruses, adenoviruses, reoviruses, noroviruses) and specific virus types (e.g. serotypes or genotypes) creates the greatest uncertainty. These aspects affect the methods for detection and quantification of viruses and anticipated removal efficiency by treatment processes. Approaches to reduce uncertainty may include: 1) inclusion of a virus indicator for assessing efficiency of virus concentration and detection by molecular methods for each sample, 2) use of viruses most resistant to individual treatment processes (e.g. adenoviruses for UV light disinfection and reoviruses for chlorination), 3) data on the ratio of virion or genome copies to infectivity in untreated wastewater, and 4) assessment of virus removal at field scale treatment systems to verify laboratory and pilot plant data for virus removal. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. A Framework for Assessing Uncertainty Associated with Human Health Risks from MSW Landfill Leachate Contamination.

    PubMed

    Mishra, Harshit; Karmakar, Subhankar; Kumar, Rakesh; Singh, Jitendra

    2017-07-01

    Landfilling is a cost-effective method, which makes it a widely used practice around the world, especially in developing countries. However, because of the improper management of landfills, high leachate leakage can have adverse impacts on soils, plants, groundwater, aquatic organisms, and, subsequently, human health. A comprehensive survey of the literature finds that the probabilistic quantification of uncertainty based on estimations of the human health risks due to landfill leachate contamination has rarely been reported. Hence, in the present study, the uncertainty about the human health risks from municipal solid waste landfill leachate contamination to children and adults was quantified to investigate its long-term risks by using a Monte Carlo simulation framework for selected heavy metals. The Turbhe sanitary landfill of Navi Mumbai, India, which was commissioned in the recent past, was selected to understand the fate and transport of heavy metals in leachate. A large residential area is located near the site, which makes the risk assessment problem both crucial and challenging. In this article, an integral approach in the form of a framework has been proposed to quantify the uncertainty that is intrinsic to human health risk estimation. A set of nonparametric cubic splines was fitted to identify the nonlinear seasonal trend in leachate quality parameters. LandSim 2.5, a landfill simulator, was used to simulate the landfill activities for various time slices, and further uncertainty in noncarcinogenic human health risk was estimated using a Monte Carlo simulation followed by univariate and multivariate sensitivity analyses. © 2016 Society for Risk Analysis.
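    The Monte Carlo step of such a framework can be sketched as follows: exposure factors and a leachate-derived groundwater concentration are sampled and pushed through the standard non-carcinogenic hazard-quotient expression HQ = (C * IR * EF * ED) / (BW * AT * RfD). All distributions and values below are placeholders, not the study's LandSim-derived inputs.

```python
# Illustrative Monte Carlo sketch of non-carcinogenic risk via the hazard
# quotient HQ = (C * IR * EF * ED) / (BW * AT * RfD) for ingestion of
# groundwater affected by leachate. All distributions and values are
# placeholders, not the study's inputs.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

C = rng.lognormal(mean=np.log(0.02), sigma=0.6, size=n)   # metal conc. [mg/L]
IR = rng.normal(2.0, 0.3, n).clip(0.5)                    # intake rate [L/day]
EF = rng.triangular(300, 350, 365, n)                     # exposure frequency [d/yr]
ED = 30.0                                                 # exposure duration [yr]
BW = rng.normal(60.0, 8.0, n).clip(30)                    # body weight [kg]
AT = ED * 365.0                                           # averaging time [d]
RfD = 0.005                                               # reference dose [mg/kg/d]

HQ = (C * IR * EF * ED) / (BW * AT * RfD)
print(f"median HQ = {np.median(HQ):.2f}, 95th percentile = {np.percentile(HQ, 95):.2f}")
print(f"P(HQ > 1) = {np.mean(HQ > 1):.3f}")
```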

  19. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

    Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrologic Data) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.

  20. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    NASA Astrophysics Data System (ADS)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.

  1. Dakota Graphical User Interface v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest; Glickman, Matthew; Gibson, Marcus

    Graphical analysis environment for Sandia’s Dakota software for optimization and uncertainty quantification. The Dakota GUI is an interactive graphical analysis environment for creating, running, and interpreting Dakota optimization and uncertainty quantification studies. It includes problem (Dakota study) set-up, option specification, simulation interfacing, analysis execution, and results visualization. Through the use of wizards, templates, and views, Dakota GUI helps users navigate Dakota’s complex capability landscape.

  2. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    Report TR-14-33, April 2014, under grant HDTRA1-09-1-0036; Donald Estep and Michael Holst. Approved for public release, distribution is unlimited. Related publication: J. Erway and M. Holst, "Barrier methods for critical exponent problems in geometric analysis and mathematical physics," submitted for publication.

  3. Quantification of dose uncertainties for the bladder in prostate cancer radiotherapy based on dominant eigenmodes

    NASA Astrophysics Data System (ADS)

    Rios, Richard; Acosta, Oscar; Lafond, Caroline; Espinosa, Jairo; de Crevoisier, Renaud

    2017-11-01

    In radiotherapy for prostate cancer, the planning dose to the bladder may be a poor surrogate for the actual delivered dose, as the bladder presents the largest inter-fraction shape variations during treatment. This paper presents PCA models as a virtual tool to estimate dosimetric uncertainties for the bladder produced by motion and deformation between fractions. Our goal is to propose a methodology to determine the minimum number of modes required to quantify dose uncertainties of the bladder for motion/deformation models based on PCA. We trained individual PCA models using the bladder contours available from three patients with a planning computed tomography (CT) and on-treatment cone-beam CTs (CBCTs). Based on the above models and via deformable image registration (DIR), we estimated two accumulated doses: firstly, an accumulated dose obtained by integrating the planning dose over the Gaussian probability distribution of the PCA model; and secondly, an accumulated dose obtained by simulating treatment courses via a Monte Carlo approach. We also computed a reference accumulated dose for each patient using his available images via DIR. Finally, we compared the planning dose with the three accumulated doses, and we calculated local dose variability and dose-volume histogram uncertainties.
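    The core of such a PCA motion/deformation model can be sketched as follows: bladder surfaces from several fractions are flattened into vectors, PCA yields the dominant deformation modes, and new plausible geometries are sampled from a Gaussian over the mode weights. The synthetic surfaces below stand in for real contours, and the dose accumulation via deformable registration is not shown.

```python
# Sketch of a PCA inter-fraction motion/deformation model: flatten surfaces
# into vectors, extract dominant modes, and sample new plausible geometries
# from a Gaussian over the mode weights. Surfaces here are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n_fractions, n_points = 10, 300

# Synthetic training shapes: a mean surface plus two smooth deformation modes.
mean_shape = rng.normal(size=3 * n_points)
modes_true = rng.normal(size=(2, 3 * n_points))
weights = rng.normal(size=(n_fractions, 2)) * np.array([3.0, 1.0])
shapes = mean_shape + weights @ modes_true + 0.05 * rng.normal(size=(n_fractions, 3 * n_points))

# PCA of the training shapes.
X = shapes - shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var = s ** 2 / (n_fractions - 1)
k = np.searchsorted(np.cumsum(var) / var.sum(), 0.90) + 1   # modes for 90% variance
print(f"{k} dominant mode(s) retained")

# Sample new plausible geometries from N(0, var_i) over the k retained modes.
new_weights = rng.normal(size=(100, k)) * np.sqrt(var[:k])
sampled_shapes = shapes.mean(axis=0) + new_weights @ Vt[:k]
print("sampled geometry array:", sampled_shapes.shape)
```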

  4. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    NASA Astrophysics Data System (ADS)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced, generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods, primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high- and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from approximately 65%-77% for PPR and MI methods, 40%-50% for IM and near 50% for CS. These observations illustrate some of the strengths and weaknesses of the methods considered herein and identify future directions for development and improvement.
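    The "standard coverage" check used to compare the methods can be written in a few lines: given a trusted reference (LDV or high-resolution PIV), coverage is the fraction of vectors whose error magnitude falls within the quoted standard uncertainty, which should approach 68% for well-calibrated Gaussian uncertainties. The numbers below are synthetic.

```python
# Sketch of the "standard coverage" metric: the fraction of measurements whose
# error relative to a trusted reference lies within the quoted 1-sigma
# uncertainty. Well-calibrated Gaussian uncertainties give roughly 68%.
import numpy as np

def coverage(u_measured, u_reference, u_uncertainty):
    error = np.abs(u_measured - u_reference)
    return np.mean(error <= u_uncertainty)

# Synthetic demonstration: true 1-sigma error 0.05 px, quoted uncertainty 0.05 px.
rng = np.random.default_rng(7)
u_ref = rng.normal(5.0, 1.0, 10_000)                 # reference velocities
u_piv = u_ref + rng.normal(0.0, 0.05, 10_000)        # PIV estimates with error
print(f"coverage = {coverage(u_piv, u_ref, 0.05):.2%}")   # ~68% expected
```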

  5. Uncertainty quantification and propagation of errors of the Lennard-Jones 12-6 parameters for n-alkanes

    PubMed Central

    Knotts, Thomas A.

    2017-01-01

    Molecular simulation has the ability to predict various physical properties that are difficult to obtain experimentally. For example, we implement molecular simulation to predict the critical constants (i.e., critical temperature, critical density, critical pressure, and critical compressibility factor) for large n-alkanes that thermally decompose experimentally (as large as C48). Historically, molecular simulation has been viewed as a tool that is limited to providing qualitative insight. One key reason for this perceived weakness in molecular simulation is the difficulty to quantify the uncertainty in the results. This is because molecular simulations have many sources of uncertainty that propagate and are difficult to quantify. We investigate one of the most important sources of uncertainty, namely, the intermolecular force field parameters. Specifically, we quantify the uncertainty in the Lennard-Jones (LJ) 12-6 parameters for the CH4, CH3, and CH2 united-atom interaction sites. We then demonstrate how the uncertainties in the parameters lead to uncertainties in the saturated liquid density and critical constant values obtained from Gibbs Ensemble Monte Carlo simulation. Our results suggest that the uncertainties attributed to the LJ 12-6 parameters are small enough that quantitatively useful estimates of the saturated liquid density and the critical constants can be obtained from molecular simulation. PMID:28527455

  6. Uncertainty analysis of gross primary production partitioned from net ecosystem exchange measurements

    NASA Astrophysics Data System (ADS)

    Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.

    2015-08-01

    Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This allowed us to estimate the uncertainty in the separated GPP at half-hourly time steps. This yielded the posterior distribution of GPP at each half hour and allowed the quantification of uncertainty. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared the informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
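    A bare-bones version of the approach is sketched below: the non-rectangular hyperbola light response is combined with a constant respiration term, and a simple Metropolis sampler (standing in for the study's MCMC setup) yields a posterior from which the uncertainty of the separated GPP can be read off. The priors, proposal widths, and synthetic half-hourly data are placeholders.

```python
# Sketch of the non-rectangular hyperbola (NRH) light response and a bare-bones
# Metropolis sampler for its parameters, illustrating how a posterior (and hence
# uncertainty in separated GPP) can be obtained from NEE data.
import numpy as np

def nrh_gpp(par_I, alpha, pmax, theta):
    s = alpha * par_I + pmax
    return (s - np.sqrt(s * s - 4.0 * theta * alpha * par_I * pmax)) / (2.0 * theta)

def nee_model(par_I, alpha, pmax, theta, reco):
    return reco - nrh_gpp(par_I, alpha, pmax, theta)   # NEE = Reco - GPP

rng = np.random.default_rng(8)
par_I = rng.uniform(0, 1500, 200)                      # PAR, synthetic half-hours
nee_obs = nee_model(par_I, 0.05, 25.0, 0.7, 3.0) + rng.normal(0, 1.0, par_I.size)

def log_post(p):
    alpha, pmax, theta, reco = p
    if not (0 < alpha < 1 and 0 < pmax < 100 and 0 < theta < 1 and 0 < reco < 20):
        return -np.inf                                  # flat priors on bounded ranges
    resid = nee_obs - nee_model(par_I, *p)
    return -0.5 * np.sum(resid ** 2)                    # Gaussian likelihood, sigma = 1

chain, p = [], np.array([0.03, 20.0, 0.5, 2.0])
lp = log_post(p)
for _ in range(20000):
    prop = p + rng.normal(0, [0.005, 1.0, 0.05, 0.3])   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        p, lp = prop, lp_prop
    chain.append(p)

chain = np.array(chain[5000:])                          # discard burn-in
gpp = nrh_gpp(1000.0, chain[:, 0], chain[:, 1], chain[:, 2])
print("GPP at PAR=1000: mean %.1f, 95%% CI (%.1f, %.1f)"
      % (gpp.mean(), *np.percentile(gpp, [2.5, 97.5])))
```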

  7. Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods

    NASA Astrophysics Data System (ADS)

    Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.

    2011-12-01

    Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for those problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performances of three different but closely related model reduction approaches: (1) PCA methods with geometric sampling (referred to as 'Method 1'), (2) PCA methods with MCMC sampling (referred to as 'Method 2'), and (3) PCA methods with MCMC sampling and inclusion of random effects (referred to as 'Method 3'). We consider a simple convolution model with five unknown parameters as our goal is to understand and visualize the advantages and disadvantages of each method by comparing their inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA analysis is inconsistent with true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.

  8. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k{sub eff}, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for gaps in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.

  9. USACM Thematic Workshop On Uncertainty Quantification And Data-Driven Modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, James R.

    The USACM Thematic Workshop on Uncertainty Quantification and Data-Driven Modeling was held on March 23-24, 2017, in Austin, TX. The organizers of the technical program were James R. Stewart of Sandia National Laboratories and Krishna Garikipati of University of Michigan. The administrative organizer was Ruth Hengst, who serves as Program Coordinator for the USACM. The organization of this workshop was coordinated through the USACM Technical Thrust Area on Uncertainty Quantification and Probabilistic Analysis. The workshop website (http://uqpm2017.usacm.org) includes the presentation agenda as well as links to several of the presentation slides (permission to access the presentations was granted by each of those speakers, respectively). Herein, this final report contains the complete workshop program that includes the presentation agenda, the presentation abstracts, and the list of posters.

  10. Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, Stephen D.

    This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?

  11. Probabilistic models and uncertainty quantification for the ionization reaction rate of atomic Nitrogen

    NASA Astrophysics Data System (ADS)

    Miki, K.; Panesi, M.; Prudencio, E. E.; Prudhomme, S.

    2012-05-01

    The objective in this paper is to analyze some stochastic models for estimating the ionization reaction rate constant of atomic Nitrogen (N + e⁻ → N⁺ + 2e⁻). Parameters of the models are identified by means of Bayesian inference using spatially resolved absolute radiance data obtained from the Electric Arc Shock Tube (EAST) wind-tunnel. The proposed methodology accounts for uncertainties in the model parameters as well as physical model inadequacies, providing estimates of the rate constant that reflect both types of uncertainties. We present four different probabilistic models by varying the error structure (either additive or multiplicative) and by choosing different descriptions of the statistical correlation among data points. In order to assess the validity of our methodology, we first present some calibration results obtained with manufactured data and then proceed by using experimental data collected at EAST experimental facility. In order to simulate the radiative signature emitted in the shock-heated air plasma, we use a one-dimensional flow solver with Park's two-temperature model that simulates non-equilibrium effects. We also discuss the implications of the choice of the stochastic model on the estimation of the reaction rate and its uncertainties. Our analysis shows that the stochastic models based on correlated multiplicative errors are the most plausible models among the four models proposed in this study. The rate of the atomic Nitrogen ionization is found to be (6.2 ± 3.3) × 10¹¹ cm³ mol⁻¹ s⁻¹ at 10,000 K.

  12. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. We resort, respectively, to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimations. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results by the Prognostic Horizon (PH) metric.
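    The exponential-regression half of the procedure can be sketched as below: an exponential trend is fitted to a degradation feature and extrapolated to a failure threshold to obtain an RUL estimate. The RVM selection of relevant vectors and the continuous updating used in the paper are not reproduced; the feature, threshold, and time base are assumed.

```python
# Sketch of the exponential-regression step for RUL estimation: fit a trend
# y = a*exp(b*t) to a degradation feature (e.g., an RMS vibration level) and
# extrapolate to a failure threshold. Data and threshold are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def exp_trend(t, a, b):
    return a * np.exp(b * t)

rng = np.random.default_rng(9)
t = np.linspace(0, 300, 60)                         # operating hours observed so far
feature = exp_trend(t, 0.1, 0.012) * rng.lognormal(0, 0.05, t.size)

(a, b), _ = curve_fit(exp_trend, t, feature, p0=[0.1, 0.01])
threshold = 5.0                                     # failure threshold on the feature
t_fail = np.log(threshold / a) / b                  # time when the trend hits it
print(f"predicted failure at t = {t_fail:.0f} h, RUL = {t_fail - t[-1]:.0f} h")
```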

  13. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    USGS Publications Warehouse

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10⁻³ in scientific notation or, for brevity, 10⁻³). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10⁻³ to 10⁻⁶. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
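    For orientation, the sketch below fits a Pearson type III distribution to the base-10 logarithms of synthetic annual peaks by the method of moments and extrapolates to very low AEPs; it omits the EMA, regional-skew weighting, low-outlier handling, and the uncertainty analyses that the study itself examines.

```python
# Sketch of an at-site log-Pearson type III fit by the method of moments
# (Bulletin 17B-style, without EMA or low-outlier tests) and extrapolation to
# very low AEPs. Peaks are synthetic, not data from the two study streamgages.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
peaks = rng.lognormal(mean=np.log(800.0), sigma=0.5, size=80)   # annual peaks [cfs]

logq = np.log10(peaks)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)                             # station skew

lp3 = stats.pearson3(skew, loc=mean, scale=std)                 # distribution of log10(Q)
for aep in (1e-2, 1e-3, 1e-4, 1e-6):
    q = 10.0 ** lp3.ppf(1.0 - aep)                              # AEP quantile
    print(f"AEP {aep:.0e}: peak streamflow ~ {q:,.0f} cfs")
```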

  14. Uncertainty Quantification of Evapotranspiration and Infiltration from Modeling and Historic Time Series at the Savannah River F-Area

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Flach, G. P.

    2012-12-01

    The objectives of this presentation are: (a) to illustrate the application of Monte Carlo and fuzzy-probabilistic approaches for uncertainty quantification (UQ) in predictions of potential evapotranspiration (PET), actual evapotranspiration (ET), and infiltration (I), using uncertain hydrological or meteorological time series data, and (b) to compare the results of these calculations with those from field measurements at the U.S. Department of Energy Savannah River Site (SRS), near Aiken, South Carolina, USA. The UQ calculations include the evaluation of aleatory (parameter uncertainty) and epistemic (model) uncertainties. The effect of aleatory uncertainty is expressed by assigning the probability distributions of input parameters, using historical monthly averaged data from the meteorological station at the SRS. The combined effect of aleatory and epistemic uncertainties on the UQ of PET, ET, and I is then expressed by aggregating the results of calculations from multiple models using a p-box and fuzzy numbers. The uncertainty in PET is calculated using the Baier-Robertson, Blaney-Criddle, Caprio, Hargreaves-Samani, Hamon, Jensen-Haise, Linacre, Makkink, Priestley-Taylor, Penman, Penman-Monteith, Thornthwaite, and Turc models. Then, ET is calculated from the modified Budyko model, followed by calculations of I from the water balance equation. We show that probabilistic and fuzzy-probabilistic calculations using multiple models generate the PET, ET, and I distributions, which are well within the range of field measurements. We also show that a selection of a subset of models can be used to constrain the uncertainty quantification of PET, ET, and I.
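
    A minimal sketch of the Monte Carlo part of such a workflow is shown below for a single PET model (Hargreaves-Samani) followed by a Budyko-type partitioning and a simple water balance. The input distributions are illustrative assumptions, not SRS data, and the p-box/fuzzy aggregation across the full set of PET models is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Assumed input distributions for one summer month (illustrative, not SRS data).
tmean = rng.normal(22.0, 1.5, n)     # monthly mean air temperature [deg C]
trange = rng.gamma(16.0, 0.6, n)     # diurnal range Tmax - Tmin [deg C]
ra = rng.normal(16.5, 0.3, n)        # extraterrestrial radiation [mm/day equivalent]
precip = rng.gamma(2.0, 55.0, n)     # monthly precipitation [mm]

# Hargreaves-Samani potential evapotranspiration [mm/day -> mm/month].
pet = 0.0023 * ra * (tmean + 17.8) * np.sqrt(np.clip(trange, 0.0, None)) * 30.0

# Budyko-curve partitioning of precipitation into actual ET, then infiltration
# from a crude monthly water balance (runoff neglected for brevity).
aridity = pet / precip
et = precip * np.sqrt(aridity * np.tanh(1.0 / aridity) * (1.0 - np.exp(-aridity)))
infiltration = np.clip(precip - et, 0.0, None)

for name, x in [("PET", pet), ("ET", et), ("I", infiltration)]:
    q05, q50, q95 = np.percentile(x, [5, 50, 95])
    print(f"{name}: median {q50:6.1f} mm/month, 90% interval [{q05:.1f}, {q95:.1f}]")
```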

  15. Parameter-induced uncertainty quantification of a regional N2O and NO3 inventory using the biogeochemical model LandscapeDNDC

    NASA Astrophysics Data System (ADS)

    Haas, Edwin; Klatt, Steffen; Kraus, David; Werner, Christian; Ruiz, Ignacio Santa Barbara; Kiese, Ralf; Butterbach-Bahl, Klaus

    2014-05-01

    Numerical simulation models are increasingly used to estimate greenhouse gas emissions at site to regional and national scales and are outlined as the most advanced methodology (Tier 3) for national emission inventory in the framework of UNFCCC reporting. Process-based models incorporate the major processes of the carbon and nitrogen cycle of terrestrial ecosystems like arable land and grasslands and are thus thought to be widely applicable at various spatial and temporal scales. The high complexity of ecosystem processes mirrored by such models requires a large number of model parameters. Many of those parameters are lumped parameters describing simultaneously the effect of environmental drivers on e.g. microbial community activity and individual processes. Thus, the precise quantification of true parameter states is often difficult or even impossible. As a result model uncertainty is not solely originating from input uncertainty but also subject to parameter-induced uncertainty. In this study we quantify regional parameter-induced model uncertainty on nitrous oxide (N2O) emissions and nitrate (NO3) leaching from arable soils of Saxony (Germany) using the biogeochemical model LandscapeDNDC. For this we calculate a regional inventory using a joint parameter distribution for key parameters describing microbial C and N turnover processes as obtained by a Bayesian calibration study. We representatively sampled 400 different parameter vectors from the discrete joint parameter distribution comprising approximately 400,000 parameter combinations and used these to calculate 400 individual realizations of the regional inventory. The spatial domain (represented by 4042 polygons) is set up with spatially explicit soil and climate information and a region-typical 3-year crop rotation consisting of winter wheat, rapeseed, and winter barley. Average N2O emission from arable soils in the state of Saxony across all 400 realizations was 1.43 ± 1.25 [kg N / ha] with a median value of 1.05 [kg N / ha]. Using the default IPCC emission factor approach (Tier 1) for direct emissions reveals a higher average N2O emission of 1.51 [kg N / ha] due to fertilizer use. In the regional uncertainty quantification the 20% likelihood range for N2O emissions is 0.79 - 1.37 [kg N / ha] (50% likelihood: 0.46 - 2.05 [kg N / ha]; 90% likelihood: 0.11 - 4.03 [kg N / ha]). Respective quantities were calculated for nitrate leaching. The method has proven its applicability to quantify parameter-induced uncertainty of simulated regional greenhouse gas emission and nitrate leaching inventories using process based biogeochemical models.
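
    The sampling-and-aggregation step can be sketched as below; run_inventory is a hypothetical stand-in for one regional LandscapeDNDC run, and the joint parameter distribution is synthetic, so the numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for one regional inventory run: maps a parameter vector
# (microbial C/N turnover parameters) to an area-weighted mean N2O emission
# [kg N/ha]. The real model is the process-based LandscapeDNDC simulator.
def run_inventory(theta):
    return float(np.exp(0.2 + 0.8 * theta[0] - 0.3 * theta[1] + 0.1 * theta[2]))

# 400 parameter vectors drawn here from a synthetic joint posterior; in the study
# they are sampled from a discrete joint distribution of ~400,000 combinations.
posterior_samples = rng.multivariate_normal(
    mean=[0.0, 0.5, -0.2],
    cov=[[0.30, 0.05, 0.00],
         [0.05, 0.20, 0.02],
         [0.00, 0.02, 0.10]],
    size=400)

emissions = np.array([run_inventory(t) for t in posterior_samples])

print(f"mean {emissions.mean():.2f} +/- {emissions.std(ddof=1):.2f} kg N/ha, "
      f"median {np.median(emissions):.2f} kg N/ha")
for p in (20, 50, 90):  # central likelihood ranges, as reported in the abstract
    lo, hi = np.percentile(emissions, [50 - p / 2, 50 + p / 2])
    print(f"{p}% likelihood range: {lo:.2f} - {hi:.2f} kg N/ha")
```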

  16. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds. The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available.
The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighed more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
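
    The prior/data weighting described above can be illustrated with a conjugate normal-normal update; the quantities and the update_rul_belief helper below are hypothetical and simplified relative to the lifecycle framework developed in the project.

```python
import numpy as np

def update_rul_belief(prior_mean, prior_var, data, data_var):
    """Conjugate normal-normal update of a RUL-related parameter.

    The posterior mean is a precision-weighted average of the prior mean and the
    sample mean: more data, or less noisy data, pulls the posterior toward the
    data and eventually swamps the prior.
    """
    n = len(data)
    prior_prec = 1.0 / prior_var
    data_prec = n / data_var
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(data))
    return post_mean, post_var

# Hypothetical prior from Type I/II prognostics (e.g., fleet Weibull analysis).
prior_mean, prior_var = 1200.0, 300.0**2

# Degradation-based (Type III) RUL estimates arriving as the component ages.
observations = np.array([950.0, 1010.0, 900.0, 870.0])
post_mean, post_var = update_rul_belief(prior_mean, prior_var, observations, 200.0**2)
print(f"posterior RUL estimate: {post_mean:.0f} h (sd {np.sqrt(post_var):.0f} h)")
```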

  17. Quantification of Uncertainty in the Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.

    2017-12-01

    Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to variability in sample representation, selection of the distribution, and estimation of the distribution parameters, the estimation of flood quantiles has always been uncertain. Hence, suitable approaches must be developed to quantify this uncertainty in the form of a prediction interval, as an alternative to a deterministic approach. The framework developed in the present study to include uncertainty in the FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of distribution parameters is identified to carry out the FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and at Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was further verified against standard bootstrap-based sampling approaches, and the proposed method was found to be reliable in modeling extreme floods as compared with the bootstrap methods.
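
    For reference, the bootstrap-based baseline that the proposed method is compared against can be sketched as follows, here with a GEV fit to synthetic annual maxima rather than the Bow River records; the distribution choice and parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic annual maximum flows [m^3/s]; stand-in for a gauged series.
ams = stats.genextreme.rvs(c=-0.1, loc=200.0, scale=60.0, size=60, random_state=rng)

T = 100                  # return period of interest [years]
p = 1.0 - 1.0 / T        # corresponding non-exceedance probability

def gev_quantile(sample, prob):
    c, loc, scale = stats.genextreme.fit(sample)
    return stats.genextreme.ppf(prob, c, loc=loc, scale=scale)

q_hat = gev_quantile(ams, p)

# Nonparametric bootstrap of the fitted quantile to form an interval estimate.
boot = np.array([gev_quantile(rng.choice(ams, ams.size, replace=True), p)
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"100-yr flood: {q_hat:.0f} m^3/s, 95% bootstrap interval [{lo:.0f}, {hi:.0f}]")
```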

  18. Uncertainty quantification of resonant ultrasound spectroscopy for material property and single crystal orientation estimation on a complex part

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Mayes, Alexander; Jauriqui, Leanne; Biedermann, Eric; Heffernan, Julieanne; Livings, Richard; Goodlet, Brent; Mazdiyasni, Siamack

    2018-04-01

    A case study is presented evaluating uncertainty in Resonant Ultrasound Spectroscopy (RUS) inversion for single crystal (SX) Ni-based superalloy Mar-M247 cylindrical dog-bone specimens. A number of surrogate models were developed with FEM model solutions, using different sampling schemes (regular grid, Monte Carlo sampling, Latin Hypercube sampling) and modeling approaches (N-dimensional cubic spline interpolation and Kriging). Repeated studies were used to quantify the well-posedness of the inversion problem, and the uncertainty was assessed in material property and crystallographic orientation estimates given typical geometric dimension variability in aerospace components. Surrogate model quality was found to be an important factor in inversion results when the model more closely represents the test data. One important finding was that, when the model matches the test data well, a Kriging surrogate model using un-sorted Latin Hypercube sampled data performed as well as the best results from an N-dimensional interpolation model using sorted data. However, both surrogate model quality and mode sorting were found to be less critical when inverting properties from either experimental data or simulated test cases with uncontrolled geometric variation.
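
    A minimal sketch of building a Kriging surrogate over a Latin Hypercube design is given below; fem_resonance is a hypothetical closed-form stand-in for the FEM resonance solutions, and the parameter ranges are assumptions rather than the Mar-M247 study values.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical forward model: maps (elastic-constant scale, mis-orientation angle,
# diameter deviation) to one resonance frequency [kHz]; a stand-in for FEM solves.
def fem_resonance(x):
    c, theta, d = x
    return 120.0 * np.sqrt(c) * (1.0 + 0.05 * np.cos(np.radians(theta))) * (1.0 - 0.8 * d)

# Latin Hypercube design over the 3-D parameter space.
sampler = qmc.LatinHypercube(d=3, seed=5)
unit = sampler.random(n=200)
lower, upper = np.array([0.8, 0.0, -0.01]), np.array([1.2, 45.0, 0.01])
X = qmc.scale(unit, lower, upper)
y = np.array([fem_resonance(x) for x in X])

# Kriging (Gaussian process) surrogate trained on the sampled "FEM" solutions.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.1, 10.0, 0.005]),
    normalize_y=True).fit(X, y)

x_test = np.array([[1.05, 12.0, 0.002]])
mean, std = gp.predict(x_test, return_std=True)
print(f"surrogate: {mean[0]:.2f} kHz +/- {std[0]:.2f} (truth {fem_resonance(x_test[0]):.2f})")
```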

  19. How well do we know the incoming solar infrared radiation?

    NASA Astrophysics Data System (ADS)

    Elsey, Jonathan; Coleman, Marc; Gardiner, Tom; Shine, Keith

    2017-04-01

    The solar spectral irradiance (SSI) has been identified as a key climate variable by the Global Climate Observing System (Bojinski et al. 2014, Bull. Amer. Meteor. Soc.). It is of importance in the modelling of atmospheric radiative transfer, and the quantification of the global energy budget. However, in the near-infrared spectral region (between 2000 and 10000 cm⁻¹) there exists a discrepancy of 7% between spectra measured from the space-based SOLSPEC instrument (Thuillier et al. 2015, Solar Physics) and those from a ground-based Langley technique (Bolsée et al. 2014, Solar Physics). This same difference is also present between different analyses of the SOLSPEC data. This work aims to reconcile some of these differences by presenting an estimate of the near-infrared SSI obtained from ground-based measurements taken using an absolutely calibrated Fourier transform spectrometer. Spectra are obtained both using the Langley technique and by direct comparison with a radiative transfer model, with appropriate handling of both aerosol scattering and molecular continuum absorption. Particular focus is dedicated to the quantification of uncertainty in these spectra, from both the inherent uncertainty in the measurement setup and that from the use of the radiative transfer code and its inputs.

  20. Uncertainty Quantification and Assessment of CO2 Leakage in Groundwater Aquifers

    NASA Astrophysics Data System (ADS)

    Carroll, S.; Mansoor, K.; Sun, Y.; Jones, E.

    2011-12-01

    Complexity of subsurface aquifers and the geochemical reactions that control drinking water compositions complicate our ability to estimate the impact of leaking CO2 on groundwater quality. We combined lithologic field data from the High Plains Aquifer, numerical simulations, and uncertainty quantification analysis to assess the role of aquifer heterogeneity and physical transport on the extent of the CO2-impacted plume over a 100-year period. The High Plains aquifer is a major aquifer over much of the central United States where CO2 may be sequestered in depleted oil and gas reservoirs or deep saline formations. Input parameters considered included aquifer heterogeneity, permeability, porosity, regional groundwater flow, CO2 and TDS leakage rates over time, and the number of leakage source points. Sensitivity analyses suggest that variations in sand and clay permeability, correlation lengths, van Genuchten parameters, and CO2 leakage rate have the greatest impact on the impacted volume or the maximum distance from the leak source. A key finding is that the relative sensitivity of the parameters changes over the 100-year period. Reduced order models developed from regression of the numerical simulations show that the volume of the CO2-impacted aquifer increases over time, with a variance of two orders of magnitude.
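
    The reduced-order-model step can be sketched as a polynomial response surface fitted to an ensemble of simulator runs; the input ranges and the closed-form stand-in for the simulator output below are assumptions for illustration, not the High Plains study values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(11)
n = 500

# Sampled input parameters (illustrative ranges).
log_perm = rng.uniform(-13.0, -11.0, n)    # log10 sand permeability [m^2]
porosity = rng.uniform(0.25, 0.40, n)
leak_rate = rng.uniform(0.1, 5.0, n)       # CO2 leakage rate [kg/s]
gradient = rng.uniform(1e-4, 1e-3, n)      # regional hydraulic gradient [-]
X = np.column_stack([log_perm, porosity, leak_rate, gradient])

# Stand-in for the simulator output: impacted aquifer volume [m^3] at 100 years.
# In practice y comes from the reactive-transport simulations themselves.
y = 1e5 * leak_rate**0.8 * 10 ** (0.4 * (log_perm + 12.0)) / porosity
y *= rng.lognormal(0.0, 0.2, n)            # unresolved variability

# Second-order polynomial response surface as the reduced order model.
rom = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, np.log(y))
print("ROM R^2 (log-volume):", round(rom.score(X, np.log(y)), 3))
```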

  1. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    NASA Astrophysics Data System (ADS)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
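
    A minimal sketch of the emulation-and-ranking idea is given below, using an SVR emulator and random-forest permutation importance on a synthetic stand-in for a CLASS output; the parameter names and the response function are illustrative assumptions, not the CLASS configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
names = ["albedo_refresh_thresh", "limiting_snow_depth", "fresh_snow_density", "max_albedo"]

# Training design: 400 parameter vectors over normalized empirical ranges, with a
# synthetic stand-in for a CLASS output (e.g., mean snow water equivalent).
X = rng.uniform(0.0, 1.0, size=(400, len(names)))
y = 2.0 * X[:, 0] + 1.5 * np.sqrt(X[:, 1]) + 0.1 * X[:, 2] + rng.normal(0.0, 0.05, 400)

# SVR emulator of the dynamical model, usable for cheap large-ensemble screening.
emulator = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(X, y)
big_design = rng.uniform(0.0, 1.0, size=(100_000, len(names)))
print("emulated ensemble mean:", emulator.predict(big_design).mean().round(3))

# Random-forest permutation importance to rank parameter influence.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:22s} {score:.3f}")
```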

  2. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef; Conrad, Patrick; Bigoni, Daniele

    QUEST (www.quest-scidac.org) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT Uncertainty Quantification library, called MUQ (muq.mit.edu).

  3. Dynamic identification of axial force and boundary restraints in tie rods and cables with uncertainty quantification using Set Inversion Via Interval Analysis

    NASA Astrophysics Data System (ADS)

    Kernicky, Timothy; Whelan, Matthew; Al-Shaer, Ehab

    2018-06-01

    A methodology is developed for the estimation of internal axial force and boundary restraints within in-service, prismatic axial force members of structural systems using interval arithmetic and contractor programming. The determination of the internal axial force and end restraints in tie rods and cables using vibration-based methods has been a long-standing problem in the area of structural health monitoring and performance assessment. However, for structural members with low slenderness, where the dynamics are significantly affected by the boundary conditions, few existing approaches allow for simultaneous identification of internal axial force and end restraints, and none permits quantification of the uncertainties in the parameter estimates due to measurement uncertainties. This paper proposes a new technique for approaching this challenging inverse problem that leverages the Set Inversion Via Interval Analysis algorithm to solve for the unknown axial forces and end restraints using natural frequency measurements. The framework developed offers the ability to completely enclose the feasible solutions to the parameter identification problem, given specified measurement uncertainties for the natural frequencies. This ability to propagate measurement uncertainty into the parameter space is critical for quantifying the confidence in the individual parameter estimates to inform decision-making within structural health diagnosis and prognostication applications. The methodology is first verified with simulated data for a case with unknown rotational end restraints and then extended to a case with unknown translational and rotational end restraints. A laboratory experiment is then presented to demonstrate the application of the methodology to an axially loaded rod with progressively increased end restraint at one end.

  4. The role of correlations in uncertainty quantification of transportation relevant fuel models

    DOE PAGES

    Fridlyand, Aleksandr; Johnson, Matthew S.; Goldsborough, S. Scott; ...

    2017-02-03

    Large reaction mechanisms are often used to describe the combustion behavior of transportation-relevant fuels like gasoline, where these are typically represented by surrogate blends, e.g., n-heptane/iso-octane/toluene. We describe efforts to quantify the uncertainty in the predictions of such mechanisms at realistic engine conditions, seeking to better understand the robustness of the model as well as the important reaction pathways and their impacts on combustion behavior. In this work, we examine the importance of taking into account correlations among reactions that utilize the same rate rules and those with multiple product channels on forward propagation of uncertainty by Monte Carlo simulations. Automated means are developed to generate the uncertainty factor assignment for a detailed chemical kinetic mechanism, by first uniquely identifying each reacting species, then sorting each of the reactions based on the rate rule utilized. Simulation results reveal that in the low temperature combustion regime for iso-octane, the majority of the uncertainty in the model predictions can be attributed to low temperature reactions of the fuel sub-mechanism. The foundational, or small-molecule, chemistry (C0-C4) only contributes significantly to uncertainties in the predictions at the highest temperatures (Tc = 900 K). Accounting for correlations between important reactions is shown to produce non-negligible differences in the estimates of uncertainty. Including correlations among reactions that use the same rate rules increases uncertainty in the model predictions, while accounting for correlations among reactions with multiple branches decreases uncertainty in some cases. Significant non-linear response is observed in the model predictions depending on how the probability distributions of the uncertain rate constants are defined. Finally, we conclude that care must be exercised in defining these probability distributions in order to reduce bias and physically unrealistic estimates in the forward propagation of uncertainty for a range of UQ activities.
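
    The effect of rate-rule correlations can be illustrated with a toy forward propagation in which reactions sharing a rate rule receive the same lognormal multiplier; the reactions, uncertainty factors, and ignition-delay surrogate below are hypothetical, not the mechanism used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20_000

# Hypothetical reactions: (nominal A-factor, uncertainty factor f, rate-rule group).
# Each rate constant is treated as lognormal with 2-sigma spread of ln(f).
reactions = {
    "RO2 isomerization (site A)": (2.0e11, 3.0, "RO2_isom"),
    "RO2 isomerization (site B)": (1.5e11, 3.0, "RO2_isom"),
    "H-abstraction by OH":        (5.0e12, 2.0, "fuel+OH"),
}

def sample_rates(correlated):
    samples, group_draw = {}, {}
    for name, (a0, f, group) in reactions.items():
        sigma = np.log(f) / 2.0
        if correlated:
            # Reactions sharing a rate rule get the same random multiplier.
            z = group_draw.setdefault(group, rng.normal(0.0, 1.0, n))
        else:
            z = rng.normal(0.0, 1.0, n)
        samples[name] = a0 * np.exp(sigma * z)
    return samples

# Toy ignition-delay surrogate that responds to both RO2 channels in the same way,
# so correlating them removes cancellation and widens the predicted uncertainty.
def ignition_delay(k):
    return 1e14 / (k["RO2 isomerization (site A)"] + k["RO2 isomerization (site B)"]) \
           * (5.0e12 / k["H-abstraction by OH"]) ** 0.3

for corr in (False, True):
    tau = ignition_delay(sample_rates(corr))
    print(f"correlated={corr}: 5-95% spread "
          f"{np.percentile(tau, 95) / np.percentile(tau, 5):.2f}x")
```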

  5. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and the methodology is then applied to the NASA Langley Uncertainty Quantification Challenge problem.

  6. A General Uncertainty Quantification Methodology for Cloud Microphysical Property Retrievals

    NASA Astrophysics Data System (ADS)

    Tang, Q.; Xie, S.; Chen, X.; Zhao, C.

    2014-12-01

    The US Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program provides long-term (~20 years) ground-based cloud remote sensing observations. However, there are large uncertainties in the retrieval products of cloud microphysical properties based on the active and/or passive remote-sensing measurements. To address this uncertainty issue, a DOE Atmospheric System Research scientific focus study, Quantification of Uncertainties in Cloud Retrievals (QUICR), has been formed. In addition to an overview of recent progress of QUICR, we will demonstrate the capacity of an observation-based general uncertainty quantification (UQ) methodology via the ARM Climate Research Facility baseline cloud microphysical properties (MICROBASE) product. This UQ method utilizes the Karhunen-Loève expansion (KLE) and the Central Limit Theorem (CLT) to quantify the retrieval uncertainties from observations and algorithm parameters. The input perturbations are imposed on major modes to take into account the cross correlations between input data, which greatly reduces the dimension of random variables (up to a factor of 50) and quantifies vertically resolved full probability distribution functions of retrieved quantities. Moreover, this KLE/CLT approach has the capability of attributing the uncertainties in the retrieval output to individual uncertainty sources and thus sheds light on improving the retrieval algorithm and observations. We will present the results of a case study for the ice water content at the Southern Great Plains during an intensive observing period on March 9, 2000. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  7. Uncertainty quantification and validation of 3D lattice scaffolds for computer-aided biomedical applications.

    PubMed

    Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J

    2017-07-01

    A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
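
    The BIC-based step of choosing a parametric distribution for the strut-level statistics can be sketched as follows, here on synthetic strut-diameter data rather than the reconstructed micro-CT models; the candidate distributions and sample values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-in for effective strut diameters [mm] extracted from reconstructed strut
# models; the real data come from micro-CT scans of the printed lattices.
diam = rng.lognormal(mean=np.log(0.42), sigma=0.08, size=120)

candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "weibull": stats.weibull_min}

def bic(dist, data):
    # Bayesian Information Criterion: k*ln(n) - 2*ln(L) with fitted parameters.
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    return len(params) * np.log(len(data)) - 2.0 * loglik

for name, dist in candidates.items():
    print(f"{name:10s} BIC = {bic(dist, diam):8.1f}")   # lower is preferred
```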

  8. CASL Dakota Capabilities Summary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Simmons, Chris; Williams, Brian J.

    2017-10-10

    The Dakota software project serves the mission of Sandia National Laboratories and supports a worldwide user community by delivering state-of-the-art research and robust, usable software for optimization and uncertainty quantification. These capabilities enable advanced exploration and risk-informed prediction with a wide range of computational science and engineering models. Dakota is the verification and validation (V&V) / uncertainty quantification (UQ) software delivery vehicle for CASL, allowing analysts across focus areas to apply these capabilities to myriad nuclear engineering analyses.

  9. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
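
    A simplified sketch of SDE-based sampling is given below: an Euler-Maruyama discretization of the overdamped Langevin SDE, whose invariant distribution is the target posterior, applied to a toy 2-D log-posterior. The paper's implementation uses an implicit Euler scheme and a problem-specific posterior, so this is only illustrative.

```python
import numpy as np

# Unnormalized log-posterior for a 2-D toy problem (a stand-in for the posterior
# over quantities that would feed a polynomial chaos expansion).
def log_post(x):
    return -0.5 * (x[0] ** 2 + 4.0 * (x[1] - x[0] ** 2) ** 2)

def grad_log_post(x, h=1e-5):
    # Central finite differences keep the sketch independent of the posterior form.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (log_post(x + e) - log_post(x - e)) / (2.0 * h)
    return g

# Euler-Maruyama discretization of the Langevin SDE
#   dX_t = 0.5 * grad log pi(X_t) dt + dW_t,
# which is ergodic for the target density pi (here, the Bayesian posterior).
rng = np.random.default_rng(0)
dt, n_steps = 5e-3, 100_000
x = np.zeros(2)
chain = np.empty((n_steps, 2))
for k in range(n_steps):
    x = x + 0.5 * dt * grad_log_post(x) + np.sqrt(dt) * rng.standard_normal(2)
    chain[k] = x

print("posterior mean estimate:", chain[n_steps // 2:].mean(axis=0).round(3))
```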

  10. A Cascade Approach to Uncertainty Estimation for the Hydrological Simulation of Droughts

    NASA Astrophysics Data System (ADS)

    Smith, Katie; Tanguy, Maliko; Parry, Simon; Prudhomme, Christel

    2016-04-01

    Uncertainty poses a significant challenge in environmental research and the characterisation and quantification of uncertainty has become a research priority over the past decade. Studies of extreme events are particularly affected by issues of uncertainty. This study focusses on the sources of uncertainty in the modelling of streamflow droughts in the United Kingdom. Droughts are a poorly understood natural hazard with no universally accepted definition. Meteorological, hydrological and agricultural droughts have different meanings and vary both spatially and temporally, yet each is inextricably linked. The work presented here is part of two extensive interdisciplinary projects investigating drought reconstruction and drought forecasting capabilities in the UK. Lumped catchment models are applied to simulate streamflow drought, and uncertainties from 5 different sources are investigated: climate input data, potential evapotranspiration (PET) method, hydrological model, within model structure, and model parameterisation. Latin Hypercube sampling is applied to develop large parameter ensembles for each model structure which are run using parallel computing on a high performance computer cluster. Parameterisations are assessed using multi-objective evaluation criteria which include both general and drought performance metrics. The effect of different climate input data and PET methods on model output is then considered using the accepted model parameterisations. The uncertainty from each of the sources creates a cascade, and when presented as such the relative importance of each aspect of uncertainty can be determined.

  11. Estimating air emissions from ships: Meta-analysis of modelling approaches and available data sources

    NASA Astrophysics Data System (ADS)

    Miola, Apollonia; Ciuffo, Biagio

    2011-04-01

    Maritime transport plays a central role in the transport sector's sustainability debate. Its contribution to air pollution and greenhouse gases is significant. An effective policy strategy to regulate air emissions requires their robust estimation in terms of quantification and location. This paper provides a critical analysis of the ship emission modelling approaches and data sources available, identifying their limits and constraints. It classifies the main methodologies on the basis of the approach followed (bottom-up or top-down) for the evaluation and geographic characterisation of emissions. The analysis highlights the uncertainty of results from the different methods. This is mainly due to the level of uncertainty connected with the sources of information that are used as inputs to the different studies. This paper describes the sources of the information required for these analyses, paying particular attention to AIS data and to the possible problems associated with their use. One way of reducing the overall uncertainty in the results could be the simultaneous use of different sources of information. This paper presents an alternative methodology based on this approach. As a final remark, it can be expected that new approaches to the problem together with more reliable data sources over the coming years could give more impetus to the debate on the global impact of maritime traffic on the environment that, currently, has only reached agreement via the "consensus" estimates provided by IMO (2009).

  12. A computationally inexpensive model for estimating dimensional measurement uncertainty due to x-ray computed tomography instrument misalignments

    NASA Astrophysics Data System (ADS)

    Ametova, Evelina; Ferrucci, Massimiliano; Chilingaryan, Suren; Dewulf, Wim

    2018-06-01

    The recent emergence of advanced manufacturing techniques such as additive manufacturing and an increased demand on the integrity of components have motivated research on the application of x-ray computed tomography (CT) for dimensional quality control. While CT has shown significant empirical potential for this purpose, there is a need for metrological research to accelerate the acceptance of CT as a measuring instrument. The accuracy in CT-based measurements is vulnerable to the instrument geometrical configuration during data acquisition, namely the relative position and orientation of x-ray source, rotation stage, and detector. Consistency between the actual instrument geometry and the corresponding parameters used in the reconstruction algorithm is critical. Currently available procedures provide users with only estimates of geometrical parameters. Quantification and propagation of uncertainty in the measured geometrical parameters must be considered to provide a complete uncertainty analysis and to establish confidence intervals for CT dimensional measurements. In this paper, we propose a computationally inexpensive model to approximate the influence of errors in CT geometrical parameters on dimensional measurement results. We use surface points extracted from a computer-aided design (CAD) model to model discrepancies in the radiographic image coordinates assigned to the projected edges between an aligned system and a system with misalignments. The efficacy of the proposed method was confirmed on simulated and experimental data in the presence of various geometrical uncertainty contributors.

  13. VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.

    2015-12-01

    A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
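
    The Richardson extrapolation and GCI part of such solution verification can be sketched as below; the three-grid solution values and the safety factor are illustrative assumptions, not output of the VAVUQ package.

```python
import numpy as np

def richardson_gci(f_coarse, f_medium, f_fine, r, fs=1.25):
    """Observed order of accuracy, Richardson-extrapolated value, and fine-grid
    GCI from solutions on three grids with constant refinement ratio r."""
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    gci_fine = fs * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)
    return p, f_exact, gci_fine

# Example: a solution functional computed on three grids with r = 2 (values assumed).
p, f_exact, gci = richardson_gci(f_coarse=1.120, f_medium=1.052, f_fine=1.031, r=2.0)
print(f"observed order p = {p:.2f}")
print(f"Richardson-extrapolated estimate = {f_exact:.4f}")
print(f"fine-grid GCI = {100 * gci:.2f}%")
```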

  14. Expert judgement and uncertainty quantification for climate change

    NASA Astrophysics Data System (ADS)

    Oppenheimer, Michael; Little, Christopher M.; Cooke, Roger M.

    2016-05-01

    Expert judgement is an unavoidable element of the process-based numerical models used for climate change projections, and the statistical approaches used to characterize uncertainty across model ensembles. Here, we highlight the need for formalized approaches to unifying numerical modelling with expert judgement in order to facilitate characterization of uncertainty in a reproducible, consistent and transparent fashion. As an example, we use probabilistic inversion, a well-established technique used in many other applications outside of climate change, to fuse two recent analyses of twenty-first century Antarctic ice loss. Probabilistic inversion is but one of many possible approaches to formalizing the role of expert judgement, and the Antarctic ice sheet is only one possible climate-related application. We recommend indicators or signposts that characterize successful science-based uncertainty quantification.

  15. Climate impacts on palm oil yields in the Nigerian Niger Delta

    NASA Astrophysics Data System (ADS)

    Okoro, Stanley U.; Schickhoff, Udo; Boehner, Juergen; Schneider, Uwe A.; Huth, Neil

    2016-04-01

    Palm oil production has increased in recent decades and is estimated to increase further. The optimal role of palm oil production, however, is controversial because of resource conflicts with alternative land uses. Local conditions and climate change affect resource competition and the desirability of palm oil production. Based on this, crop yield simulations using different climate model outputs under different climate scenarios could be an important tool in addressing the problem of uncertainty quantification among different climate model outputs. Previous studies of this region have focused mostly on single experimental fields, have not considered variations in Agro-Ecological Zones, climatic conditions, varieties, and management practices, have in most cases not extended to the various IPCC climate scenarios, and were mostly based on a single climate model output. Furthermore, the uncertainty quantification of the climate-impact model has rarely been investigated for this region. To this end we use the biophysical simulation model APSIM (Agricultural Production Systems Simulator) to simulate the regional climate impact on oil palm yield over the Nigerian Niger Delta. We also examine whether an ensemble of crop yield model outputs reduces the uncertainty more than an ensemble of climate model outputs. The results could serve as a baseline for policy makers in this region in understanding the interaction between the potential for energy crop production in the region, its food security, and other negative feedbacks that could be associated with bioenergy from oil palm. Keywords: Climate Change, Climate impacts, Land use and Crop yields.

  16. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  17. Cost collection and analysis for health economic evaluation.

    PubMed

    Smith, Kristine A; Rudmik, Luke

    2013-08-01

    To improve the understanding of common health care cost collection, estimation, analysis, and reporting methodologies. Ovid MEDLINE (1947 to December 2012), Cochrane Central register of Controlled Trials, Database of Systematic Reviews, Health Technology Assessment, and National Health Service Economic Evaluation Database. This article discusses the following cost collection methods: defining relevant resources, quantification of consumed resources, and resource valuation. It outlines the recommendations for cost reporting in economic evaluations and reviews the techniques on how to handle cost data uncertainty. Last, it discusses the controversial topics of future costs and patient productivity losses. Health care cost collection and estimation can be challenging, and an organized approach is required to optimize accuracy of economic evaluation outcomes. Understanding health care cost collection and estimation techniques will improve both critical appraisal and development of future economic evaluations.

  18. Advances in life cycle assessment and emergy evaluation with case studies in gold mining and pineapple production

    NASA Astrophysics Data System (ADS)

    Ingwersen, Wesley W.

    Life cycle assessment (LCA) is an internationally standardized framework for assessing the environmental impacts of products that is rapidly evolving to improve understanding and quantification of how complex product systems depend upon and affect the environment. This dissertation contributes to that evolution through the development of new methods for measuring impacts, estimating the uncertainty of impacts, and measuring ranges of environmental performance, with a focus on product systems in non-OECD countries that have not been well characterized. The integration of a measure of total energy use, emergy, is demonstrated in an LCA of gold from the Yanacocha mine in Peru in the second chapter. A model for estimating the accuracy of emergy results is proposed in the following chapter. The fourth chapter presents a template for LCA-based quantification of the range of environmental performance for tropical agricultural products using the example of fresh pineapple production for export in Costa Rica that can be used to create product labels with environmental information. The final chapter synthesizes how each methodological contribution will together improve the science of measuring product environmental performance.

  19. Assessment of SFR Wire Wrap Simulation Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delchini, Marc-Olivier G.; Popov, Emilian L.; Pointer, William David

    Predictive modeling and simulation of nuclear reactor performance and fuel are challenging due to the large number of coupled physical phenomena that must be addressed. Models that will be used for design or operational decisions must be analyzed for uncertainty to ascertain impacts to safety or performance. Rigorous, structured uncertainty analyses are performed by characterizing the model’s input uncertainties and then propagating the uncertainties through the model to estimate output uncertainty. This project is part of the ongoing effort to assess modeling uncertainty in Nek5000 simulations of flow configurations relevant to the advanced reactor applications of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. Three geometries are under investigation in these preliminary assessments: a 3-D pipe, a 3-D 7-pin bundle, and a single pin from the Thermal-Hydraulic Out-of-Reactor Safety (THORS) facility. Initial efforts have focused on gaining an understanding of Nek5000 modeling options and integrating Nek5000 with Dakota. These tasks are being accomplished by demonstrating the use of Dakota to assess parametric uncertainties in a simple pipe flow problem. This problem is used to optimize performance of the uncertainty quantification strategy and to estimate computational requirements for assessments of complex geometries. A sensitivity analysis with respect to three turbulence models was conducted for turbulent flow in a single wire-wrapped pin (THORS) geometry. Section 2 briefly describes the software tools used in this study and provides appropriate references. Section 3 presents the coupling interface between Dakota and a computational fluid dynamic (CFD) code (Nek5000 or STARCCM+), with details on the workflow, the scripts used for setting up the run, and the scripts used for post-processing the output files. In Section 4, the meshing methods used to generate the THORS and 7-pin bundle meshes are explained. Sections 5, 6 and 7 present numerical results for the 3-D pipe, the single pin THORS mesh, and the 7-pin bundle mesh, respectively.

  20. Community Project for Accelerator Science and Simulation (ComPASS) Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, John R.; Cowan, Benjamin M.; Veitzer, S. A.

    2016-03-04

    Tech-X participated across the full range of ComPASS activities, with efforts in the Energy Frontier primarily through modeling of laser plasma accelerators and dielectric laser acceleration, in the Intensity Frontier primarily through electron cloud modeling, and in Uncertainty Quantification being applied to dielectric laser acceleration. In the following we present the progress and status of our activities for the entire period of the ComPASS project for the different areas of Energy Frontier, Intensity Frontier and Uncertainty Quantification.

  1. Quantification of the Uncertainties for the Space Launch System Liftoff/Transition and Ascent Databases

    NASA Technical Reports Server (NTRS)

    Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.

    2016-01-01

    A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.

  2. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.

  3. Case studies in Bayesian microbial risk assessments.

    PubMed

    Kennedy, Marc C; Clough, Helen E; Turner, Joanne

    2009-12-21

    The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.

  4. Toward a Multi-City Framework for Urban GHG Estimation in the United States: Methods, Uncertainties, and Future Goals

    NASA Astrophysics Data System (ADS)

    Mueller, K. L.; Callahan, W.; Davis, K. J.; Dickerson, R. R.; Duren, R. M.; Gurney, K. R.; Karion, A.; Keeling, R. F.; Kim, J.; Lauvaux, T.; Miller, C. E.; Shepson, P. B.; Turnbull, J. C.; Weiss, R. F.; Whetstone, J. R.

    2017-12-01

    City and State governments are increasingly interested in mitigating greenhouse gas (GHG) emissions to improve sustainability within their jurisdictions. Estimation of urban GHG emissions remains an active research area with many sources of uncertainty. To support the effort of improving measurement of trace gas emissions in city environments, several federal agencies along with academic, research, and private entities have been working within a handful of domestic metropolitan areas to improve both (1) the assessment of GHG emissions accuracy using a variety of measurement technologies, and (2) the tools that can better assess GHG inventory data at urban mitigation scales based upon these measurements. The National Institute of Standards and Technology (NIST) activities have focused on three areas, or testbeds: Indianapolis (INFLUX experiment), Los Angeles (the LA Megacities project), and the Northeastern Corridor areas encompassing Washington and Baltimore (the NEC/BW GHG Measurements project). These cities represent diverse meteorological, terrain, demographic, and emissions characteristics having a broad range of complexities. To date this research has involved multiple measurement systems and integrated observing approaches, all aimed at advancing the development of a robust science base upon which higher-accuracy quantification approaches can rest. Progress toward such scientifically robust, widely-accepted emissions quantification methods will rely upon continuous performance assessment. Such assessment is challenged by the complexities of cities themselves (e.g., population, urban form) along with the many variables impacting a city's technological ability to estimate its GHG emissions (e.g., meteorology, density of observations). We present the different NIST testbeds and a proposal to initiate conceptual development of a reference framework supporting the comparison of multi-city GHG emissions estimates. Such a reference framework has potential to provide the basis for city governments to choose the measurements and methods that can quantify their GHG and related trace gas emissions at levels commensurate with their needs.

  5. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or the presence of model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.

  6. Assessment of parametric uncertainty for groundwater reactive transport modeling

    USGS Publications Warehouse

    Shi, Xiaoqing; Ye, Ming; Curtis, Gary P.; Miller, Geoffery L.; Meyer, Philip D.; Kohler, Matthias; Yabusaki, Steve; Wu, Jichun

    2014-01-01

    The validity of using Gaussian assumptions for model residuals in uncertainty quantification of a groundwater reactive transport model was evaluated in this study. Least squares regression methods explicitly assume Gaussian residuals, and the assumption leads to Gaussian likelihood functions, model parameters, and model predictions. While the Bayesian methods do not explicitly require the Gaussian assumption, Gaussian residuals are widely used. This paper shows that the residuals of the reactive transport model are non-Gaussian, heteroscedastic, and correlated in time; characterizing them requires using a generalized likelihood function such as the formal generalized likelihood function developed by Schoups and Vrugt (2010). For the surface complexation model considered in this study for simulating uranium reactive transport in groundwater, parametric uncertainty is quantified using the least squares regression methods and Bayesian methods with both Gaussian and formal generalized likelihood functions. While the least squares methods and Bayesian methods with Gaussian likelihood function produce similar Gaussian parameter distributions, the parameter distributions of Bayesian uncertainty quantification using the formal generalized likelihood function are non-Gaussian. In addition, predictive performance of formal generalized likelihood function is superior to that of least squares regression and Bayesian methods with Gaussian likelihood function. The Bayesian uncertainty quantification is conducted using the differential evolution adaptive metropolis (DREAM(zs)) algorithm; as a Markov chain Monte Carlo (MCMC) method, it is a robust tool for quantifying uncertainty in groundwater reactive transport models. For the surface complexation model, the regression-based local sensitivity analysis and Morris- and DREAM(ZS)-based global sensitivity analysis yield almost identical ranking of parameter importance. The uncertainty analysis may help select appropriate likelihood functions, improve model calibration, and reduce predictive uncertainty in other groundwater reactive transport and environmental modeling.
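    The study samples a formal generalized likelihood (Schoups and Vrugt, 2010) with DREAM(ZS); the sketch below is a much simpler stand-in that pairs one non-Gaussian choice, a Laplace residual likelihood, with a basic random-walk Metropolis sampler on a one-parameter toy decay model. The model, noise scale, proposal step, and parameter bounds are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reactive transport" response: exponential decay of concentration with uncertain rate k
t_obs = np.linspace(0.0, 10.0, 25)
k_true = 0.35
y_obs = np.exp(-k_true * t_obs) + rng.laplace(0.0, 0.03, t_obs.size)   # heavy-tailed noise

def log_like(k, b=0.03):
    """Laplace (double-exponential) likelihood: a simple non-Gaussian stand-in for a
    formal generalized likelihood; b is a fixed residual scale."""
    resid = y_obs - np.exp(-k * t_obs)
    return -np.sum(np.abs(resid)) / b - t_obs.size * np.log(2.0 * b)

# Random-walk Metropolis (the paper uses DREAM(ZS); this is the simplest MCMC variant)
n_iter, step = 20_000, 0.02
chain = np.empty(n_iter)
k, ll = 0.5, log_like(0.5)
for i in range(n_iter):
    k_prop = k + step * rng.normal()
    ll_prop = log_like(k_prop) if k_prop > 0 else -np.inf
    if np.log(rng.uniform()) < ll_prop - ll:
        k, ll = k_prop, ll_prop
    chain[i] = k

posterior = chain[n_iter // 2:]            # discard burn-in
print(f"posterior mean k = {posterior.mean():.3f}, "
      f"95% CI = ({np.percentile(posterior, 2.5):.3f}, {np.percentile(posterior, 97.5):.3f})")
```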

  7. Determination of the uncertainties in radiation doses from ingestion of strontium-90

    NASA Astrophysics Data System (ADS)

    Apostoaei, Andrei Iulian

    Quantification of the uncertainties in internal dosimetry is important because it can impact the outcome of dose reconstruction, risk assessment or epidemiological studies. This research focused on determination of the uncertainties in the dose factors from a single ingestion of 90Sr by adults, and analyzed the changes with age and the effect of gender. The uncertainties in the estimated dose factors are a factor of 6 for the bone surface, 5 for the red bone marrow, 2.5 for the bladder and stomach, 2.2 for the small intestine, 2.1 for the upper large intestine and 2.7 for the lower large intestine. For the rest of the organs the uncertainty is a factor of 3. Only four parameters of the biokinetic model showed an age dependency within the adult age group: the fractional transfers of strontium from plasma to cortical and trabecular bone, and the removal rates from the cortical and trabecular bone, respectively. When age-dependent biokinetic parameters were used, the estimated dose factors are very close to the dose factors obtained using age-independent kinetics (within 40%). Thus, the dose factors based on age-independent parameters should suffice for most practical purposes. The dose factors and the associated uncertainties were also calculated as a function of age at exposure and attained age. These age-dependent curves can be used for estimating doses from continuous intakes, or doses delivered over a limited period of time. In addition to the committed dose, an expected dose is also estimated in this work. The expected dose is calculated using the dose rate weighted by the probability of surviving up to the age when the dose rate is delivered. For exposure at young ages the expected dose and the committed dose are similar, but the committed dose decreases to zero when exposure occurs close to age 70, while the expected dose has elevated values past age 70. No gender differences were found for the bone surface, red bone marrow, or large intestine. The doses to the soft tissues are about 20% larger for females than for males, because of the differences in whole-body mass between males and females.

  8. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
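    The abstract does not spell out which solution-verification metric was used; the sketch below shows one common choice, Richardson extrapolation and a grid convergence index computed from three mesh levels. The temperatures, refinement ratio, and safety factor are hypothetical, not values from the RIP study.

```python
import math

def grid_convergence(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
    """Richardson-extrapolation-based solution verification for three mesh levels
    with a constant refinement ratio r."""
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)   # observed order of accuracy
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)                  # extrapolated "mesh-free" value
    gci_fine = safety * abs((f_fine - f_med) / f_fine) / (r**p - 1.0)   # grid convergence index
    return p, f_exact, gci_fine

# Hypothetical peak temperatures (K) from coarse, medium, and fine meshes
p, f_exact, gci = grid_convergence(f_fine=352.1, f_med=353.0, f_coarse=355.4)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_exact:.1f} K, "
      f"fine-grid GCI ~ {100 * gci:.2f}%")
```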

  9. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    NASA Astrophysics Data System (ADS)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., errors in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative in order to improve the reliability of modeling results. Uncertainty analysis must also address difficulties in the calibration of hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO with P-factor>0.83, R-factor<0.56 and R2>0.91, NSE>0.89, and 0.18
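    As a minimal sketch of one of the four algorithms (GLUE), the code below samples a single recession parameter of a toy linear-reservoir model, keeps the "behavioural" parameter sets above an NSE threshold, and computes a P-factor from the resulting prediction band. The toy model, threshold, and synthetic data stand in for SWAT and the actual watershed data and are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_hydro_model(rain, k):
    """Stand-in for SWAT: a linear-reservoir runoff response with recession parameter k."""
    q = np.zeros_like(rain)
    for t in range(1, rain.size):
        q[t] = (1.0 - k) * q[t - 1] + k * rain[t]
    return q

rain = rng.gamma(0.3, 10.0, 200)
q_obs = toy_hydro_model(rain, 0.25) * rng.normal(1.0, 0.1, 200)   # synthetic "observations"

# GLUE: sample the parameter from its prior range, keep "behavioural" sets above an NSE threshold
k_samples = rng.uniform(0.05, 0.8, 5000)
nse = np.array([1.0 - np.sum((q_obs - toy_hydro_model(rain, k))**2)
                / np.sum((q_obs - q_obs.mean())**2) for k in k_samples])
behavioural = nse > 0.7

# 95% prediction band from the behavioural runs (equal weights here for brevity;
# GLUE commonly weights the quantiles by the likelihood measure instead)
sims = np.array([toy_hydro_model(rain, k) for k in k_samples[behavioural]])
lower, upper = np.percentile(sims, [2.5, 97.5], axis=0)
p_factor = np.mean((q_obs >= lower) & (q_obs <= upper))
print(f"behavioural sets: {behavioural.sum()} of {k_samples.size}, P-factor ~ {p_factor:.2f}")
```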

  10. National-scale aboveground biomass geostatistical mapping with FIA inventory and GLAS data: Preparation for sparsely sampled lidar assisted forest inventory

    NASA Astrophysics Data System (ADS)

    Babcock, C. R.; Finley, A. O.; Andersen, H. E.; Moskal, L. M.; Morton, D. C.; Cook, B.; Nelson, R.

    2017-12-01

    Upcoming satellite lidar missions, such as GEDI and IceSat-2, are designed to collect laser altimetry data from space for narrow bands along orbital tracts. As a result lidar metric sets derived from these sources will not be of complete spatial coverage. This lack of complete coverage, or sparsity, means traditional regression approaches that consider lidar metrics as explanatory variables (without error) cannot be used to generate wall-to-wall maps of forest inventory variables. We implement a coregionalization framework to jointly model sparsely sampled lidar information and point-referenced forest variable measurements to create wall-to-wall maps with full probabilistic uncertainty quantification of all inputs. We inform the model with USFS Forest Inventory and Analysis (FIA) in-situ forest measurements and GLAS lidar data to spatially predict aboveground forest biomass (AGB) across the contiguous US. We cast our model within a Bayesian hierarchical framework to better model complex space-varying correlation structures among the lidar metrics and FIA data, which yields improved prediction and uncertainty assessment. To circumvent computational difficulties that arise when fitting complex geostatistical models to massive datasets, we use a Nearest Neighbor Gaussian process (NNGP) prior. Results indicate that a coregionalization modeling approach to leveraging sampled lidar data to improve AGB estimation is effective. Further, fitting the coregionalization model within a Bayesian mode of inference allows for AGB quantification across scales ranging from individual pixel estimates of AGB density to total AGB for the continental US with uncertainty. The coregionalization framework examined here is directly applicable to future spaceborne lidar acquisitions from GEDI and IceSat-2. Pairing these lidar sources with the extensive FIA forest monitoring plot network using a joint prediction framework, such as the coregionalization model explored here, offers the potential to improve forest AGB accounting certainty and provide maps for post-model fitting analysis of the spatial distribution of AGB.

  11. Studying emissions of CO2 in the Baltimore/Washington area using airborne measurements: source attribution, flux quantification, and model comparison

    NASA Astrophysics Data System (ADS)

    Ahn, D.; Hansford, J. R.; Salawitch, R. J.; Ren, X.; Cohen, M.; Karion, A.; Whetstone, J. R.; Salmon, O. E.; Shepson, P. B.; Gurney, K. R.; Osterman, G. B.; Dickerson, R. R.

    2017-12-01

    We study emissions of CO2 in the Baltimore-Washington area using airborne in-situ measurements, obtained during the February 2015 Fluxes of Greenhouse Gases in Maryland (FLAGG-MD) campaign. In this study, we attributed enhanced signals of CO2 to several power plants and two urban areas (Baltimore City and Washington, DC), using the NOAA HYSPLIT air parcel trajectory model as well as the analysis of chemical ratios to quantify the source/receptor relationship. Then, the fluxes of attributed CO2 are estimated using a mass balance approach. The uncertainty in the aircraft-based mass balance approach is estimated by conducting a detailed sensitivity analysis of CO2 fluxes, considering factors such as the background mixing ratio of CO2, wind direction and speed, PBL heights, the horizontal boundary, and vertical interpolation methods. Estimated fluxes of CO2 with estimated uncertainty ranges are then compared to output from various emissions data and models, such as CEMS, CarbonTracker, FFDAS, and ODIAC. Finally, column CO2 data over the Baltimore-Washington region observed by the OCO-2 satellite instrument are statistically compared to aircraft in-situ observations, to assess how well OCO-2 is able to quantify geographic and synoptic-scale variability.
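    A minimal sketch of the mass-balance idea: integrate the background-subtracted CO2 enhancement times the transect-normal wind over a downwind "curtain". The enhancement field, background shift, and wind speed below are invented, and the campaign's interpolation of actual flight data is not reproduced; the final loop only illustrates the kind of background-sensitivity check described above.

```python
import numpy as np

# Hypothetical downwind "curtain" of CO2 enhancements, as if gridded from transects at several altitudes
y = np.linspace(-5000.0, 5000.0, 101)       # distance along the transect (m)
z = np.linspace(100.0, 1500.0, 15)          # altitudes up to an assumed PBL top (m)
dy, dz = y[1] - y[0], z[1] - z[0]
Y, Z = np.meshgrid(y, z)

enhancement_ppm = 8.0 * np.exp(-(Y / 2000.0) ** 2) * np.exp(-Z / 800.0)   # CO2 above background
wind_perp = 6.0                                                           # transect-normal wind (m/s)

# ppm -> kg/m^3 at roughly standard conditions: 1 ppm CO2 ~ 44.01 g/mol over 22.414 L/mol
kg_per_m3_per_ppm = 1e-6 * 44.01 / 22.414e-3 / 1000.0      # ~2e-6 kg m^-3 per ppm

# Mass balance: flux = sum over the curtain of (C - C_bg) * U_perp * dy * dz
flux_kg_s = np.sum(enhancement_ppm * kg_per_m3_per_ppm * wind_perp) * dy * dz
print(f"estimated CO2 flux ~ {flux_kg_s:.0f} kg/s")

# A crude sensitivity check: shift the assumed background by +/- 1 ppm
for delta in (-1.0, 1.0):
    f = np.sum((enhancement_ppm - delta) * kg_per_m3_per_ppm * wind_perp) * dy * dz
    print(f"background shifted by {delta:+.0f} ppm -> flux ~ {f:.0f} kg/s")
```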

  12. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
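    A minimal multilevel Monte Carlo sketch under stated assumptions: levels are successively finer discretizations of a cheap surrogate "solver", the fine and coarse solutions at each level share the same random inputs, and the telescoping sum of level corrections gives the estimate. The surrogate integrand and sample allocation are illustrative, not the paper's pore-scale solver or its optimized allocation.

```python
import numpy as np

rng = np.random.default_rng(3)

def level_solver(theta, level):
    """Cheap surrogate for a pore-scale solver: trapezoidal estimate of an 'effective
    parameter' integral on 2**(level+2) cells, so higher levels are finer and costlier."""
    x = np.linspace(0.0, 1.0, 2 ** (level + 2) + 1)
    f = np.exp(-theta * x) / (1.0 + x)                 # toy integrand standing in for the physics
    dx = x[1] - x[0]
    return dx * (0.5 * (f[0] + f[-1]) + f[1:-1].sum())

def mlmc_estimate(samples_per_level):
    """Telescoping MLMC estimator: E[P_L] ~ E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level, n in enumerate(samples_per_level):
        thetas = rng.lognormal(0.0, 0.5, n)            # random input, e.g. a material property
        fine = np.array([level_solver(t, level) for t in thetas])
        if level == 0:
            correction = fine
        else:
            coarse = np.array([level_solver(t, level - 1) for t in thetas])
            correction = fine - coarse                  # shared inputs couple the two discretizations
        total += correction.mean()
    return total

# Many samples on cheap coarse levels, few on the expensive fine levels
print(f"MLMC estimate ~ {mlmc_estimate([4000, 1000, 250, 60]):.5f}")
```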

  13. The roughness of grounded ice sheet beds: Case studies from high resolution radio echo sounding studies in Antarctica

    NASA Astrophysics Data System (ADS)

    Young, Duncan; Blankeship, Donald; Beem, Lucas; Cavitte, Marie; Quartini, Enrica; Lindzey, Laura; Jackson, Charles; Roberts, Jason; Ritz, Catherine; Siegert, Martin; Greenbaum, Jamin; Frederick, Bruce

    2017-04-01

    The roughness of subglacial interfaces (as measured by airborne radar echo sounding) at length scales between the profile line spacing and the footprint of the instrument is a key, but complex, signature of glacial and geomorphic processes, material lithology and integrated history at the bed of ice sheets. Subglacial roughness is also intertwined with assessments of ice thickness uncertainty using radar echo sounding and with the utility of interpolation methodologies, and it is a key aspect of subglacial access strategies. Here we present an assessment of subglacial roughness estimation in both West and East Antarctica, and compare this to exposed subglacial terrains. We will use recent high resolution aerogeophysical surveys to examine what variations in roughness are a fingerprint for, to assess the limits of ice thickness uncertainty quantification, and to compare strategies for roughness assessment and utilization.

  14. Monte Carlo Uncertainty Quantification for an Unattended Enrichment Monitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, Kenneth D.; Smith, Leon E.; Wittman, Richard S.

    As a case study for uncertainty analysis, we consider a model flow monitor for measuring enrichment in gas centrifuge enrichment plants (GCEPs) that could provide continuous monitoring of all declared gas flow and provide high-accuracy gas enrichment estimates as a function of time. The monitor system could include NaI(Tl) gamma-ray spectrometers, a pressure signal-sharing device to be installed on an operator's pressure gauge or a dedicated inspector pressure sensor, and temperature sensors attached to the outside of the header pipe, to provide pressure, temperature, and gamma-ray spectra measurements of UF6 gas flow through unit header pipes. Our study builds on previous modeling and analysis methods development for enrichment monitor concepts and a software tool that was developed at Oak Ridge National Laboratory to generate and analyze synthetic data.

  15. The U/Th production ratio and the age of the Milky Way from meteorites and Galactic halo stars

    NASA Astrophysics Data System (ADS)

    Dauphas, Nicolas

    2005-06-01

    Some heavy elements (with mass number A > 69) are produced by the `rapid' (r)-process of nucleosynthesis, where lighter elements are bombarded with a massive flux of neutrons. Although this is characteristic of supernovae and neutron star mergers, uncertainties in where the r-process occurs persist because stellar models are too crude to allow precise quantification of this phenomenon. As a result, there are many uncertainties and assumptions in the models used to calculate the production ratios of actinides (like uranium-238 and thorium-232). Current estimates of the U/Th production ratio range from ~0.4 to 0.7. Here I show that the U/Th abundance ratio in meteorites can be used, in conjunction with observations of low-metallicity stars in the halo of the Milky Way, to determine the U/Th production ratio very precisely. This value can be used in future studies to constrain the possible nuclear mass formulae used in r-process calculations, to help determine the source of Galactic cosmic rays, and to date circumstellar grains. I also estimate the age of the Milky Way in a way that is independent of the uncertainties associated with fluctuations in the microwave background or models of stellar evolution.

  16. Determination of aminopolycarboxylic acids in river water by solid-phase extraction on activated charcoal cartridges and gas chromatography with mass spectrometric detection. Method performance characteristics and estimation of the uncertainty.

    PubMed

    Jiménez, Juan J

    2013-04-03

    A new sample preparation procedure to determine aminopolycarboxylic acids (ethylenediaminetetraacetic acid, EDTA; nitrilotriacetic acid, NTA; diethylenetriaminepentaacetic acid, DTPA; and cyclohexanediaminetetraacetic acid, CDTA) in river water is described. The procedure consists of the solid-phase extraction of the aminopolycarboxylic acids on activated charcoal cartridges after increasing the ionic strength and acidifying the sample. The extract was eluted with methanol and the analytes were methylated in the presence of BF3/methanol to determine them by GC with mass spectrometric detection. Recoveries were higher than 90% with good repeatability and inter-day precision for concentrations close to the quantification limit (about 10 μg L-1) and higher. It was verified that the proposed method is robust according to the Youden and Steiner test and free of matrix effects arising from the presence of organic matter and iron(III), as deduced from statistical tests. A bottom-up approach was followed to estimate the uncertainty of the measured concentration. At concentrations close to 10 μg L-1, the most relevant step of the method is the calculation of the interpolated concentration, which has a high relative standard uncertainty.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
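    A hedged sketch of the random-versus-systematic bookkeeping mentioned above, in the spirit of the GUM: counting statistics are treated as uncorrelated between items, while a shared calibration uncertainty is treated as fully correlated and so does not average down across items. The calibration constant, count rates, and count time are invented, not values from the case study.

```python
import numpy as np

# Enrichment-meter-style measurement: E = k * C_net (net 186 keV count rate times a calibration).
# Illustrative numbers only; u_k is treated as a systematic (fully correlated) component.
k, u_k_rel = 0.004, 0.01          # calibration constant and its 1% relative standard uncertainty
c_net = np.array([1210.0, 1185.0, 1240.0, 1198.0])    # net count rates for four items (counts/s)
u_c_rel = 1.0 / np.sqrt(c_net * 600.0)                # counting (Poisson) component for 600 s counts

enrichment = k * c_net
u_item = enrichment * np.sqrt(u_c_rel**2 + u_k_rel**2)   # combined standard uncertainty per item

# Aggregating over items: random parts add in quadrature, the shared calibration bias adds linearly
total = enrichment.sum()
u_random = np.sqrt(np.sum((enrichment * u_c_rel) ** 2))
u_system = total * u_k_rel
u_total = np.sqrt(u_random**2 + u_system**2)
print(f"per-item enrichment: {enrichment}, per-item 1-sigma: {u_item}")
print(f"total: {total:.3f} +/- {u_total:.3f} (1-sigma); "
      f"random {u_random:.4f}, systematic {u_system:.4f}")
```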

  18. Toward best practice framing of uncertainty in scientific publications: A review of Water Resources Research abstracts

    NASA Astrophysics Data System (ADS)

    Guillaume, Joseph H. A.; Helgeson, Casey; Elsawah, Sondoss; Jakeman, Anthony J.; Kummu, Matti

    2017-08-01

    Uncertainty is recognized as a key issue in water resources research, among other sciences. Discussions of uncertainty typically focus on tools and techniques applied within an analysis, e.g., uncertainty quantification and model validation. But uncertainty is also addressed outside the analysis, in writing scientific publications. The language that authors use conveys their perspective of the role of uncertainty when interpreting a claim—what we call here "framing" the uncertainty. This article promotes awareness of uncertainty framing in four ways. (1) It proposes a typology of eighteen uncertainty frames, addressing five questions about uncertainty. (2) It describes the context in which uncertainty framing occurs. This is an interdisciplinary topic, involving philosophy of science, science studies, linguistics, rhetoric, and argumentation. (3) We analyze the use of uncertainty frames in a sample of 177 abstracts from the Water Resources Research journal in 2015. This helped develop and tentatively verify the typology, and provides a snapshot of current practice. (4) We make provocative recommendations to achieve a more influential, dynamic science. Current practice in uncertainty framing might be described as carefully considered incremental science. In addition to uncertainty quantification and degree of belief (present in ~5% of abstracts), uncertainty is addressed by a combination of limiting scope, deferring to further work (~25%) and indicating evidence is sufficient (~40%)—or uncertainty is completely ignored (~8%). There is a need for public debate within our discipline to decide in what context different uncertainty frames are appropriate. Uncertainty framing cannot remain a hidden practice evaluated only by lone reviewers.

  19. La coherence conceptuelle d'etudiants collegiaux en mecanique Newtonienne et en metrologie

    NASA Astrophysics Data System (ADS)

    Periard, Martin

    This thesis evaluates the coherence of the conceptual network demonstrated by college students in life and applied sciences. This evaluation was based on the analysis of Burt tables issuing from multiple-choice questionnaires, on the creation and careful examination of a novel tool, the matrix of specific discrimination coefficients, which is described in the main text, and on the qualitative analysis of the actual laboratory work of students carrying out an experiment. Over the course of this project, four research axes were explored. (1) What is the conceptual coherence demonstrated in Newtonian mechanics? (2) Is the mastery of uncertainty quantification related to the development of logical thinking or to mathematical competency? (3) What is the conceptual coherence demonstrated in the quantification of experimental uncertainty? (4) What are the concrete procedures used by students to quantify experimental uncertainty in a semi-directed laboratory context? The main conclusions that emerged from each research axis can be summarized as follows. (1) The most prevalent erroneous conceptions are not solidly set in a rigid conceptual network. For example, a student successful on a question about Newton's third law (the most difficult subject of the Force Concept Inventory) is only slightly more likely than the other participants to succeed on another related question. Many pairs of questions display a negative specific discrimination coefficient, demonstrating weak conceptual coherence in the pre-test and somewhat improved conceptual coherence in the post-test. (2) While a small proportion of students demonstrated marked deficiencies on questions related to the control of variables and to the relationship between the graphical display of experimental data and a mathematical model, the majority of students can be considered to adequately master those subjects. However, almost every student demonstrated a lack of mastery of the concepts underlying the quantification of experimental uncertainty and the propagation of uncertainty (hereafter referred to as metrology). No statistically significant correlation was observed between the three main topics, suggesting that they are largely independent cognitive abilities. The Burt tables demonstrated a greater degree of conceptual coherence among control-of-variables questions than suggested by Pearson correlation coefficients. Equivalent questions on metrology did not demonstrate a clear conceptual coherence. (3) Analysis of a questionnaire entirely devoted to metrology showed erroneous conceptions caused by prior learning (didactical obstacles), erroneous conceptions based on intuitive models, and a lack of global comprehension of metrological concepts, although some appear to be almost acquired. (4) When doing real experiments in a semi-directed laboratory, students demonstrated the same difficulties identified in the questionnaire of axis (3), which can be interpreted as corroborating the previously obtained results. However, many behaviors related to measurement were observed that could not have been anticipated solely by analyzing answers to the multiple-choice questionnaire. Interviews immediately following each semi-directed laboratory allowed the participants to detail certain aspects of their metrological methodology.
Most notably, these covered the use of repeated-measurement strategies, their "spontaneous" strategies for quantifying uncertainty, and their explanations of numerical estimates of reading uncertainties. Overall, uncertainty propagation algorithms were adequately employed. Many erroneous metrological conceptions appear strongly resistant to modification through instruction, among them the assignment of the resolution of a digital scale as the uncertainty value and the lack of stacking strategies to reduce uncertainty. The conception that a numerical value cannot be more precise than the tolerance of an instrument seems firmly set. Key words: Burt tables, conceptual coherence, experimental uncertainty, laboratories, metrology, Newtonian mechanics, uncertainty propagation.

  20. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
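    The paper's collocation combines Smolyak sparse grids with Kronrod-Patterson-Hermite nested rules; the sketch below shows only the simplest one-dimensional relative, a plain Gauss-Hermite rule applied to a toy pressure response with a single Gaussian log-permeability, checked against the exact lognormal moments. The response function and sigma are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def pressure_drop(log_k):
    """Toy single-phase response: pressure drop across a core scales as 1/exp(log_k)."""
    return 1.0 / np.exp(log_k)

sigma = 0.5                       # std. dev. of the uncertain log-permeability (assumed)
nodes, weights = hermgauss(7)     # 7-point Gauss-Hermite rule (the 1D building block of a sparse grid)

# E[f(Y)] with Y ~ N(0, sigma^2):  (1/sqrt(pi)) * sum_i w_i f(sqrt(2)*sigma*x_i)
samples = pressure_drop(np.sqrt(2.0) * sigma * nodes)
mean = np.sum(weights * samples) / np.sqrt(np.pi)
second = np.sum(weights * samples**2) / np.sqrt(np.pi)
print(f"collocation mean ~ {mean:.4f}, std ~ {np.sqrt(second - mean**2):.4f}")

# Exact lognormal moments for comparison: E[exp(-Y)] = exp(sigma^2/2)
print(f"exact mean        = {np.exp(sigma**2 / 2):.4f}, "
      f"std = {np.sqrt(np.exp(2 * sigma**2) - np.exp(sigma**2)):.4f}")
```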

  1. What Factors Influence Women's Perceptions of their Systemic Recurrence Risk after Breast Cancer Treatment?

    PubMed

    Lee, Kamaria L; Janz, Nancy K; Zikmund-Fisher, Brian J; Jagsi, Reshma; Wallner, Lauren P; Kurian, Allison W; Katz, Steven J; Abrahamse, Paul; Hawley, Sarah T

    2018-01-01

    Breast cancer patients' misunderstanding of their systemic cancer recurrence risk has consequences on decision-making and quality of life. Little is known about how women derive their risk estimates. Using Los Angeles and Georgia's SEER registries (2014-2015), a random sample of early-stage breast cancer patients was sent surveys about 2 to 3 months after surgery ( N = 3930; RR, 68%). We conducted an inductive thematic analysis of open-ended responses about why women chose their risk estimates in a uniquely large sub-sample ( N = 1,754). Clinician estimates of systemic recurrence risk were provided for patient sub-groups with DCIS and with low-, intermediate-, and high-risk invasive disease. Women's perceived risk of systemic recurrence (0% to 100%) was categorized as overestimation, reasonably accurate estimation, or underestimation (0% for invasive disease) and was compared across identified factors and by clinical presentation. Women identified 9 main factors related to their clinical experience (e.g., diagnosis and testing; treatment) and non-clinical beliefs (e.g., uncertainty; spirituality). Women who mentioned at least one clinical experience factor were significantly less likely to overestimate their risk (12% v. 43%, P < 0.001). Most women who were influenced by "communication with a clinician" had reasonably accurate recurrence estimates (68%). "Uncertainty" and "family and personal history" were associated with overestimation, particularly for women with DCIS (75%; 84%). "Spirituality, religion, and faith" was associated with an underestimation of risk (63% v. 20%, P < 0.001). The quantification of our qualitative results is subject to any biases that may have occurred during the coding process despite rigorous methodology. Patient-clinician communication is important for breast cancer patients' understanding of their numeric risk of systemic recurrence. Clinician discussions about recurrence risk should address uncertainty and relevance of family and personal history.

  2. Near-source mobile methane emission estimates using EPA Method 33A and a novel probabilistic approach as a basis for leak quantification in urban areas

    NASA Astrophysics Data System (ADS)

    Albertson, J. D.

    2015-12-01

    Methane emissions from underground pipeline leaks remain an ongoing issue in the development of accurate methane emission inventories for the natural gas supply chain. Application of mobile methods during routine street surveys would help address this issue, but there are large uncertainties in current approaches. In this paper, we describe results from a series of near-source (< 30 m) controlled methane releases where an instrumented van was used to measure methane concentrations during both fixed location sampling and during mobile traverses immediately downwind of the source. The measurements were used to evaluate the application of EPA Method 33A for estimating methane emissions downwind of a source and also to test the application of a new probabilistic approach for estimating emission rates from mobile traverse data.

  3. Uncertainty quantification in flood risk assessment

    NASA Astrophysics Data System (ADS)

    Blöschl, Günter; Hall, Julia; Kiss, Andrea; Parajka, Juraj; Perdigão, Rui A. P.; Rogger, Magdalena; Salinas, José Luis; Viglione, Alberto

    2017-04-01

    Uncertainty is inherent to flood risk assessments because of the complexity of the human-water system, which is characterised by nonlinearities and interdependencies, because of limited knowledge about system properties and because of cognitive biases in human perception and decision-making. On top of the uncertainty associated with the assessment of the existing risk to extreme events, additional uncertainty arises because of temporal changes in the system due to climate change, modifications of the environment, population growth and the associated increase in assets. Novel risk assessment concepts are needed that take into account all these sources of uncertainty. They should be based on the understanding of how flood extremes are generated and how they change over time. They should also account for the dynamics of risk perception of decision makers and population in the floodplains. In this talk we discuss these novel risk assessment concepts through examples from Flood Frequency Hydrology, Socio-Hydrology and Predictions Under Change. We believe that uncertainty quantification in flood risk assessment should lead to a robust approach of integrated flood risk management aiming at enhancing resilience rather than searching for optimal defense strategies.

  4. Uncertainty quantification for PZT bimorph actuators

    NASA Astrophysics Data System (ADS)

    Bravo, Nikolas; Smith, Ralph C.; Crews, John

    2018-03-01

    In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, which include the Robobee. We develop the actuator model in the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model and include local and global sensitivity analyses of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.

  5. Uncertainty Quantification and Statistical Engineering for Hypersonic Entry Applications

    NASA Technical Reports Server (NTRS)

    Cozmuta, Ioana

    2011-01-01

    NASA has invested significant resources in developing and validating a mathematical construct for TPS margin management: a) Tailorable for low/high reliability missions; b) Tailorable for ablative/reusable TPS; c) Uncertainty Quantification and Statistical Engineering are valuable tools not exploited enough; and d) Need to define strategies combining both Theoretical Tools and Experimental Methods. The main reason for this lecture is to give a flavor of where UQ and SE could contribute and hope that the broader community will work with us to improve in these areas.

  6. Tutorial examples for uncertainty quantification methods.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Bord, Sarah

    2015-08-01

    This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.
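    In the spirit of the window heat-transfer tutorial mentioned above (with invented distributions, not those used in the actual lecture material), a minimal Monte Carlo uncertainty quantification example looks like this:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Heat loss through a window: Q = U * A * dT, with an uncertain U-value and temperature difference.
U = rng.normal(2.8, 0.2, n)        # overall heat transfer coefficient, W/(m^2 K) (assumed)
A = 1.5                            # window area, m^2 (treated as known)
dT = rng.normal(20.0, 3.0, n)      # indoor-outdoor temperature difference, K (assumed)

Q = U * A * dT
print(f"mean heat loss: {Q.mean():.0f} W, std: {Q.std():.0f} W")
print(f"95% interval: ({np.percentile(Q, 2.5):.0f}, {np.percentile(Q, 97.5):.0f}) W")
```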

  7. Bayesian calibration of mechanistic aquatic biogeochemical models and benefits for environmental management

    NASA Astrophysics Data System (ADS)

    Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu

    2008-09-01

    Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO 2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.

  8. Health risks of climate change: an assessment of uncertainties and its implications for adaptation policies.

    PubMed

    Wardekker, J Arjan; de Jong, Arie; van Bree, Leendert; Turkenburg, Wim C; van der Sluijs, Jeroen P

    2012-09-19

    Projections of health risks of climate change are surrounded with uncertainties in knowledge. Understanding of these uncertainties will help the selection of appropriate adaptation policies. We made an inventory of conceivable health impacts of climate change, explored the type and level of uncertainty for each impact, and discussed its implications for adaptation policy. A questionnaire-based expert elicitation was performed using an ordinal scoring scale. Experts were asked to indicate the level of precision with which health risks can be estimated, given the present state of knowledge. We assessed the individual scores, the expertise-weighted descriptive statistics, and the argumentation given for each score. Suggestions were made for how dealing with uncertainties could be taken into account in climate change adaptation policy strategies. The results showed that the direction of change could be indicated for most anticipated health effects. For several potential effects, too little knowledge exists to indicate whether any impact will occur, or whether the impact will be positive or negative. For several effects, rough 'order-of-magnitude' estimates were considered possible. Factors limiting health impact quantification include: lack of data, multi-causality, unknown impacts considering a high-quality health system, complex cause-effect relations leading to multi-directional impacts, possible changes of present-day response-relations, and difficulties in predicting local climate impacts. Participants considered heat-related mortality and non-endemic vector-borne diseases particularly relevant for climate change adaptation. For possible climate related health impacts characterised by ignorance, adaptation policies that focus on enhancing the health system's and society's capability of dealing with possible future changes, uncertainties and surprises (e.g. through resilience, flexibility, and adaptive capacity) are most appropriate. For climate related health effects for which rough risk estimates are available, 'robust decision-making' is recommended. For health effects with limited societal and policy relevance, we recommend focusing on no-regret measures. For highly relevant health effects, precautionary measures can be considered. This study indicated that analysing and characterising uncertainty by means of a typology can be a very useful approach for selection and prioritization of preferred adaptation policies to reduce future climate related health risks.

  9. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
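    The paper builds piecewise PC expansions on either side of the detected discontinuity via a Rosenblatt transformation; the sketch below shows only the basic ingredient, a global Legendre PC expansion of a smooth toy model obtained by orthogonal projection with Gauss-Legendre quadrature, with a Monte Carlo check of the resulting moments. The toy model and expansion order are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def model(xi):
    """Toy forward model with a smooth response to a single input uniform on [-1, 1]."""
    return np.exp(0.7 * xi) + 0.3 * xi**2

order = 6
nodes, weights = leggauss(order + 2)          # Gauss-Legendre rule for the projection integrals
f_nodes = model(nodes)

# Projection: c_k = E[f(xi) P_k(xi)] / E[P_k(xi)^2] with xi ~ Uniform(-1, 1) and E[P_k^2] = 1/(2k+1)
coeffs = np.zeros(order + 1)
for k in range(order + 1):
    pk = legval(nodes, np.eye(order + 1)[k])          # Legendre polynomial P_k at the nodes
    coeffs[k] = (2 * k + 1) * 0.5 * np.sum(weights * f_nodes * pk)

mean = coeffs[0]
variance = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, order + 1) + 1))
print(f"PC mean ~ {mean:.4f}, PC std ~ {np.sqrt(variance):.4f}")

# Monte Carlo check of the same moments
xi = np.random.default_rng(0).uniform(-1.0, 1.0, 200_000)
f_mc = model(xi)
print(f"MC mean ~ {f_mc.mean():.4f}, MC std ~ {f_mc.std():.4f}")
```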

  10. Quantifying uncertainty due to fission-fusion dynamics as a component of social complexity.

    PubMed

    Ramos-Fernandez, Gabriel; King, Andrew J; Beehner, Jacinta C; Bergman, Thore J; Crofoot, Margaret C; Di Fiore, Anthony; Lehmann, Julia; Schaffner, Colleen M; Snyder-Mackler, Noah; Zuberbühler, Klaus; Aureli, Filippo; Boyer, Denis

    2018-05-30

    Groups of animals (including humans) may show flexible grouping patterns, in which temporary aggregations or subgroups come together and split, changing composition over short temporal scales (i.e., fission and fusion). A high degree of fission-fusion dynamics may constrain the regulation of social relationships, introducing uncertainty in interactions between group members. Here we use Shannon's entropy to quantify the predictability of subgroup composition for three species known to differ in the way their subgroups come together and split over time: spider monkeys (Ateles geoffroyi), chimpanzees (Pan troglodytes) and geladas (Theropithecus gelada). We formulate a random expectation of entropy that considers subgroup size variation and sample size, against which the observed entropy in subgroup composition can be compared. Using the theory of set partitioning, we also develop a method to estimate the number of subgroups that the group is likely to be divided into, based on the composition and size of single focal subgroups. Our results indicate that Shannon's entropy and the estimated number of subgroups present at a given time provide quantitative metrics of uncertainty in the social environment (within which social relationships must be regulated) for groups with different degrees of fission-fusion dynamics. These metrics also represent an indirect quantification of the cognitive challenges posed by socially dynamic environments. Overall, our novel methodological approach provides new insight for understanding the evolution of social complexity and the mechanisms to cope with the uncertainty that results from fission-fusion dynamics.
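    A rough sketch of the entropy comparison, using randomly generated stand-in data rather than field observations: compute Shannon's entropy of the distribution of distinct subgroup compositions and compare it against a null expectation that preserves subgroup sizes but randomizes membership. The individuals, subgroup sizes, and observation count are invented, and real data would replace the `observed` list.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(11)

def shannon_entropy(compositions):
    """Shannon entropy (bits) of the distribution of distinct subgroup compositions."""
    counts = np.array(list(Counter(compositions).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Stand-in data: each observation is the set of individuals seen together in one focal subgroup
individuals = list("ABCDEFGH")
observed = [frozenset(rng.choice(individuals, size=rng.integers(2, 5), replace=False))
            for _ in range(300)]

h_obs = shannon_entropy(observed)

# Null expectation: same subgroup sizes, membership drawn at random (averaged over resamples)
h_null = np.mean([shannon_entropy(
    [frozenset(rng.choice(individuals, size=len(s), replace=False)) for s in observed])
    for _ in range(200)])

print(f"observed entropy: {h_obs:.2f} bits, random expectation: {h_null:.2f} bits")
print("lower observed entropy => more predictable subgroup composition")
```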

  11. Pareto-optimal estimates that constrain mean California precipitation change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
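    The paper searches the space of subensembles with an evolutionary algorithm; for a small candidate set the Pareto front can instead be found with a brute-force non-dominated filter, as in the sketch below. The per-model errors, the candidate subensemble sizes, and the subensemble error measure are placeholders, not values derived from CMIP5.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Hypothetical per-model errors in three evaluation fields (SST, zonal wind, precipitation);
# real use would compute these against observations for each CMIP5 model.
n_models = 12
model_errors = rng.uniform(0.2, 1.0, size=(n_models, 3))

def subensemble_errors(members):
    """Error of a subensemble, approximated here as the mean of its members' errors."""
    return model_errors[list(members)].mean(axis=0)

candidates = [c for k in (3, 4, 5) for c in combinations(range(n_models), k)]
scores = np.array([subensemble_errors(c) for c in candidates])

def is_dominated(i):
    """A candidate is dominated if another is no worse in all measures and better in at least one."""
    better_eq = np.all(scores <= scores[i], axis=1)
    strictly = np.any(scores < scores[i], axis=1)
    return np.any(better_eq & strictly)

pareto = [candidates[i] for i in range(len(candidates)) if not is_dominated(i)]
print(f"{len(pareto)} Pareto-optimal subensembles out of {len(candidates)} candidates")
```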

  12. Elucidating Inherent Uncertainties in Data Assimilation for Predictions Incorporating Non-stationary Processes - Focus on Predictive Phenology

    NASA Astrophysics Data System (ADS)

    Lowman, L.; Barros, A. P.

    2017-12-01

    Data assimilation (DA) is the widely accepted procedure for estimating parameters within predictive models because of the adaptability and uncertainty quantification offered by Bayesian methods. DA applications in phenology modeling offer critical insights into how extreme weather or changes in climate impact the vegetation life cycle. Changes in leaf onset and senescence, root phenology, and intermittent leaf shedding imply large changes in the surface radiative, water, and carbon budgets at multiple scales. Models of leaf phenology require concurrent atmospheric and soil conditions to determine how biophysical plant properties respond to changes in temperature, light and water demand. Presently, climatological records for fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI), the modelled states indicative of plant phenology, are not available. Further, DA models are typically trained on short periods of record (e.g., less than 10 years). Using limited records with a DA framework imposes non-stationarity on estimated parameters and the resulting predicted model states. This talk discusses how uncertainty introduced by the inherent non-stationarity of the modeled processes propagates through a land-surface hydrology model coupled to a predictive phenology model. How water demand is accounted for in the upscaling of DA model inputs, together with the choice of analysis period, serves as a key source of uncertainty in the FPAR and LAI predictions. Parameters estimated from different DA periods effectively calibrate a plant water-use strategy within the land-surface hydrology model. For example, when extreme droughts are included in the DA period, the plants are trained to take up water, transpire, and assimilate carbon under favorable conditions and quickly shut down at the onset of water stress.
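    As one simple flavour of the Bayesian assimilation referred to above (not necessarily the scheme used in this work), an ensemble Kalman-style update of an LAI state and a jointly estimated growth parameter might look like the following, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(21)

# Ensemble update of a phenology state (LAI) and a model parameter (growth rate); illustrative only.
n_ens = 200
lai = rng.normal(2.0, 0.6, n_ens)          # forecast ensemble of leaf area index
growth = rng.normal(0.05, 0.02, n_ens)     # uncertain parameter, jointly updated with the state

state = np.vstack([lai, growth])           # 2 x n_ens augmented state
obs, obs_var = 2.8, 0.15**2                # a single LAI-equivalent observation and its variance
H = np.array([[1.0, 0.0]])                 # observation operator: we observe LAI only

P = np.cov(state)                                    # ensemble covariance (2 x 2)
K = P @ H.T / (H @ P @ H.T + obs_var)                # Kalman gain (2 x 1)
perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n_ens)
state += K @ (perturbed_obs - H @ state)             # stochastic EnKF update of state and parameter

print(f"posterior LAI: {state[0].mean():.2f} +/- {state[0].std():.2f}")
print(f"posterior growth rate: {state[1].mean():.3f} +/- {state[1].std():.3f}")
```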

  13. Scalable Methods for Uncertainty Quantification, Data Assimilation and Target Accuracy Assessment for Multi-Physics Advanced Simulation of Light Water Reactors

    NASA Astrophysics Data System (ADS)

    Khuwaileh, Bassam

    High fidelity simulation of nuclear reactors entails large scale applications characterized with high dimensionality and tremendous complexity where various physics models are integrated in the form of coupled models (e.g. neutronic with thermal-hydraulic feedback). Each of the coupled modules represents a high fidelity formulation of the first principles governing the physics of interest. Therefore, new developments in high fidelity multi-physics simulation and the corresponding sensitivity/uncertainty quantification analysis are paramount to the development and competitiveness of reactors achieved through enhanced understanding of the design and safety margins. Accordingly, this dissertation introduces efficient and scalable algorithms for performing efficient Uncertainty Quantification (UQ), Data Assimilation (DA) and Target Accuracy Assessment (TAA) for large scale, multi-physics reactor design and safety problems. This dissertation builds upon previous efforts for adaptive core simulation and reduced order modeling algorithms and extends these efforts towards coupled multi-physics models with feedback. The core idea is to recast the reactor physics analysis in terms of reduced order models. This can be achieved via identifying the important/influential degrees of freedom (DoF) via the subspace analysis, such that the required analysis can be recast by considering the important DoF only. In this dissertation, efficient algorithms for lower dimensional subspace construction have been developed for single physics and multi-physics applications with feedback. Then the reduced subspace is used to solve realistic, large scale forward (UQ) and inverse problems (DA and TAA). Once the elite set of DoF is determined, the uncertainty/sensitivity/target accuracy assessment and data assimilation analysis can be performed accurately and efficiently for large scale, high dimensional multi-physics nuclear engineering applications. Hence, in this work a Karhunen-Loeve (KL) based algorithm previously developed to quantify the uncertainty for single physics models is extended for large scale multi-physics coupled problems with feedback effect. Moreover, a non-linear surrogate based UQ approach is developed, used and compared to performance of the KL approach and brute force Monte Carlo (MC) approach. On the other hand, an efficient Data Assimilation (DA) algorithm is developed to assess information about model's parameters: nuclear data cross-sections and thermal-hydraulics parameters. Two improvements are introduced in order to perform DA on the high dimensional problems. First, a goal-oriented surrogate model can be used to replace the original models in the depletion sequence (MPACT -- COBRA-TF - ORIGEN). Second, approximating the complex and high dimensional solution space with a lower dimensional subspace makes the sampling process necessary for DA possible for high dimensional problems. Moreover, safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. Accordingly, an inverse problem can be defined and solved to assess the contributions from sources of uncertainty; and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. 
In this dissertation, a subspace-based, gradient-free, nonlinear algorithm for inverse uncertainty quantification, namely the Target Accuracy Assessment (TAA), has been developed and tested. The ideas proposed in this dissertation were first validated using lattice physics applications simulated with the SCALE 6.1 package (Pressurized Water Reactor (PWR) and Boiling Water Reactor (BWR) lattice models). Ultimately, the algorithms proposed here were applied to perform UQ and DA for assembly-level (CASL progression problem number 6) and core-wide problems representing Watts Bar Nuclear 1 (WBN1) for cycle 1 of depletion (CASL progression problem number 9), modeled and simulated using VERA-CS, which consists of several coupled multi-physics models. The analysis and algorithms developed in this dissertation were encoded and implemented in a newly developed toolkit, the Reduced Order Modeling based Uncertainty/Sensitivity Estimator (ROMUSE).
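    A minimal sketch of the snapshot-based Karhunen-Loeve (POD) subspace construction that underlies this kind of reduced order modeling, using an invented three-input toy "full model" in place of the coupled neutronics/thermal-hydraulics codes; the retained-energy threshold and snapshot count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)

n_outputs, n_snapshots = 500, 40

def full_model(theta):
    """Stand-in for an expensive coupled simulation: a smooth field response to 3 inputs."""
    x = np.linspace(0.0, 1.0, n_outputs)
    return theta[0] * np.sin(2 * np.pi * x) + theta[1] * x**2 + theta[2] * np.exp(-5 * x)

thetas = rng.normal(0.0, 1.0, size=(n_snapshots, 3))
snapshots = np.column_stack([full_model(t) for t in thetas])   # n_outputs x n_snapshots

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
rank = int(np.searchsorted(energy, 0.999) + 1)     # keep enough modes for 99.9% of snapshot variance
basis = U[:, :rank]
print(f"reduced subspace dimension: {rank} (from {n_snapshots} snapshots)")

# Any new full-model output can now be approximated in the reduced subspace
new_run = full_model(rng.normal(0.0, 1.0, 3))
coords = basis.T @ (new_run - mean_field[:, 0])
recon_error = np.linalg.norm(new_run - mean_field[:, 0] - basis @ coords) / np.linalg.norm(new_run)
print(f"relative reconstruction error for a new run: {recon_error:.2e}")
```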

  14. Uncertainty Quantification in Aeroelasticity

    NASA Astrophysics Data System (ADS)

    Beran, Philip; Stanford, Bret; Schrock, Christopher

    2017-01-01

    Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.

  15. Information transduction capacity reduces the uncertainties in annotation-free isoform discovery and quantification

    PubMed Central

    Deng, Yue; Bao, Feng; Yang, Yang; Ji, Xiangyang; Du, Mulong; Zhang, Zhengdong

    2017-01-01

    The automated transcript discovery and quantification of high-throughput RNA sequencing (RNA-seq) data are important tasks of next-generation sequencing (NGS) research. However, these tasks are challenging due to the uncertainties that arise in the inference of complete splicing isoform variants from partially observed short reads. Here, we address this problem by explicitly reducing the inherent uncertainties in a biological system caused by missing information. In our approach, the RNA-seq procedure for transforming transcripts into short reads is considered an information transmission process. Consequently, the data uncertainties are substantially reduced by exploiting the information transduction capacity of information theory. The experimental results obtained from the analyses of simulated datasets and RNA-seq datasets from cell lines and tissues demonstrate the advantages of our method over state-of-the-art competitors. Our algorithm is an open-source implementation of MaxInfo. PMID:28911101

  16. WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, H; Yeung, I; Milosevic, M

    2016-06-15

    Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model in the tumor was developed. The distribution of the tracer activity was described as the sum of two different probability distributions, one for the normoxic (and necrotic), the other for the hypoxic voxels. The widths of the distributions arise due to variability of the transport, tumor tissue inhomogeneity, tracer binding kinetics, and due to PET image noise. Quantification of HF was performed for various levels of variability using two different methodologies: a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and b) estimation of the (posterior) probability distributions based on maximizing likelihood optimization that does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value for large variability. For the patients, a significant uncertainty of the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to directly optimize for the weights of both distributions, however, may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences to threshold-based HF-values. Conclusion: HF-values depend critically on the magnitude of the different sources of tracer uptake variability. A measure of confidence should also be reported.
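
    As a rough illustration of the two methodologies on synthetic voxel data, the sketch below compares a muscle-surrogate threshold with a maximum-likelihood two-component mixture fit (scikit-learn's EM implementation); the distribution shapes and parameters are invented, not the study's imaging model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic tumour voxels: 70% normoxic, 30% hypoxic, with PET-like spread.
normoxic = rng.normal(loc=1.0, scale=0.15, size=7000)
hypoxic = rng.normal(loc=1.8, scale=0.25, size=3000)
uptake = np.concatenate([normoxic, hypoxic]).reshape(-1, 1)

# Maximum-likelihood fit of a two-component mixture (EM under the hood).
gmm = GaussianMixture(n_components=2, random_state=0).fit(uptake)
hi = int(np.argmax(gmm.means_.ravel()))      # component with the higher uptake
hf_mle = gmm.weights_[hi]                    # hypoxic fraction from the weights

# Surrogate-threshold alternative: voxels above mean + 3 SD of "muscle".
muscle = rng.normal(loc=1.0, scale=0.15, size=2000)
threshold = muscle.mean() + 3.0 * muscle.std()
hf_threshold = float(np.mean(uptake.ravel() > threshold))

print(f"HF (mixture MLE): {hf_mle:.3f}, HF (threshold): {hf_threshold:.3f}")
```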

  17. Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions

    PubMed Central

    Johnson, Derek R.; Covington, April N.; Clark, Nigel N.

    2016-01-01

    The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). Currently, a variety of techniques of varied uncertainties exists to measure or estimate methane emissions from components or facilities. Currently, only one commercial system is available for quantification of component level emissions and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosive-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser based greenhouse gas sensor, data acquisition device, and analysis software. Dependent upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand drawn cart, on-road vehicle bed, or from the bed of utility terrain vehicles (UTVs). The FFS is able to quantify methane emission rates with a relative uncertainty of ± 4.4%. The FFS has proven, real world operation for the quantification of methane emissions occurring in conventional and remote facilities. PMID:27341646

  18. Design and Use of a Full Flow Sampling System (FFS) for the Quantification of Methane Emissions.

    PubMed

    Johnson, Derek R; Covington, April N; Clark, Nigel N

    2016-06-12

    The use of natural gas continues to grow with increased discovery and production of unconventional shale resources. At the same time, the natural gas industry faces continued scrutiny for methane emissions from across the supply chain, due to methane's relatively high global warming potential (25-84x that of carbon dioxide, according to the Energy Information Administration). Currently, a variety of techniques of varied uncertainties exists to measure or estimate methane emissions from components or facilities. Currently, only one commercial system is available for quantification of component level emissions and recent reports have highlighted its weaknesses. In order to improve accuracy and increase measurement flexibility, we have designed, developed, and implemented a novel full flow sampling system (FFS) for quantification of methane emissions and greenhouse gases based on transportation emissions measurement principles. The FFS is a modular system that consists of an explosive-proof blower(s), mass airflow sensor(s) (MAF), thermocouple, sample probe, constant volume sampling pump, laser based greenhouse gas sensor, data acquisition device, and analysis software. Dependent upon the blower and hose configuration employed, the current FFS is able to achieve a flow rate ranging from 40 to 1,500 standard cubic feet per minute (SCFM). Utilization of laser-based sensors mitigates interference from higher hydrocarbons (C2+). Co-measurement of water vapor allows for humidity correction. The system is portable, with multiple configurations for a variety of applications ranging from being carried by a person to being mounted in a hand drawn cart, on-road vehicle bed, or from the bed of utility terrain vehicles (UTVs). The FFS is able to quantify methane emission rates with a relative uncertainty of ± 4.4%. The FFS has proven, real world operation for the quantification of methane emissions occurring in conventional and remote facilities.

  19. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  20. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. We propose a deterministic computational interpolation scheme which identifies most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. As Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  1. Multilevel UQ strategies for large-scale multiphysics applications: PSAAP II solar receiver

    NASA Astrophysics Data System (ADS)

    Jofre, Lluis; Geraci, Gianluca; Iaccarino, Gianluca

    2017-06-01

    Uncertainty quantification (UQ) plays a fundamental part in building confidence in predictive science. Of particular interest is the case of modeling and simulating engineering applications where, due to the inherent complexity, many uncertainties naturally arise, e.g. domain geometry, operating conditions, errors induced by modeling assumptions, etc. In this regard, one of the pacing items, especially in high-fidelity computational fluid dynamics (CFD) simulations, is the large amount of computing resources typically required to propagate uncertainty through the models. Upcoming exascale supercomputers will significantly increase the available computational power. However, UQ approaches cannot rely solely on brute force Monte Carlo (MC) sampling; the large number of uncertainty sources and the presence of nonlinearities in the solution will make straightforward MC analysis unaffordable. Therefore, this work explores the multilevel MC strategy, and its extension to multi-fidelity and time convergence, to accelerate the estimation of the effect of uncertainties. The approach is described in detail, and its performance demonstrated on a radiated turbulent particle-laden flow case relevant to solar energy receivers (PSAAP II: Particle-laden turbulence in a radiation environment). Investigation funded by DoE's NNSA under PSAAP II.
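
    A toy two-level version of the multilevel MC idea is sketched below; the model function and its grid-resolution argument are invented stand-ins for a coarse/fine pair of CFD runs, not the PSAAP II solver.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(u, n_cells):
    """Hypothetical QoI whose discretization error decays as 1/n_cells."""
    exact = np.exp(-u**2)
    return exact + (1.0 / n_cells) * np.sin(20.0 * u)

def mlmc_two_level(n_coarse, n_fine, grid_coarse=16, grid_fine=128):
    # Level 0: many cheap coarse-grid samples.
    u0 = rng.normal(size=n_coarse)
    level0 = model(u0, grid_coarse).mean()
    # Level 1: few expensive samples of the fine-minus-coarse correction,
    # evaluated on the SAME random inputs so the difference has low variance.
    u1 = rng.normal(size=n_fine)
    correction = (model(u1, grid_fine) - model(u1, grid_coarse)).mean()
    return level0 + correction

est_mlmc = mlmc_two_level(n_coarse=20_000, n_fine=500)
est_mc = model(rng.normal(size=500), 128).mean()   # plain MC at the fine level
print(f"two-level MLMC: {est_mlmc:.4f}, fine-level MC with the same fine budget: {est_mc:.4f}")
# reference: E[exp(-U^2)] for U ~ N(0, 1) is 1/sqrt(3), about 0.5774
```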

  2. Quantifying uncertainty in measurement of mercury in suspended particulate matter by cold vapor technique using atomic absorption spectrometry with hydride generator.

    PubMed

    Singh, Nahar; Ahuja, Tarushee; Ojha, Vijay Narain; Soni, Daya; Tripathy, S Swarupa; Leito, Ivo

    2013-01-01

    As a result of rapid industrialization, several chemical forms of organic and inorganic mercury are constantly introduced to the environment and affect humans and animals directly. All forms of mercury have toxic effects; therefore accurate measurement of mercury is of prime importance, especially in suspended particulate matter (SPM) collected through a high volume sampler (HVS). In the quantification of mercury in SPM samples, several steps are involved from sampling to final result. The quality, reliability and confidence level of the analyzed data depend upon the measurement uncertainty of the whole process. Evaluation of the measurement uncertainty of results is one of the requirements of the standard ISO/IEC 17025:2005 (European Standard EN IS/ISO/IEC 17025:2005, issue1:1-28, 2006). In the presented study, the uncertainty estimation for mercury determination in SPM has been carried out using the cold vapor Atomic Absorption Spectrometry-Hydride Generator (AAS-HG) technique followed by a wet chemical digestion process. For the calculation of uncertainty, we have considered many general potential sources of uncertainty. After the analysis of data from seven diverse sites of Delhi, it has been concluded that the mercury concentration varies from 1.59 ± 0.37 to 14.5 ± 2.9 ng/m(3) at the 95% confidence level (k = 2).
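
    The sketch below shows the generic GUM-style combination of uncertainty components with a coverage factor k = 2 that underlies results like those above; the component values are invented placeholders, not the study's uncertainty budget.

```python
import math

# Invented relative standard uncertainties for a trace-metal measurement:
# repeatability, calibration standard, sampled air volume, digestion recovery.
components = {"repeatability": 0.055, "calibration": 0.040,
              "air_volume": 0.030, "recovery": 0.060}

# Independent components combine in quadrature (root-sum-of-squares).
u_combined_rel = math.sqrt(sum(u**2 for u in components.values()))

result = 14.5                                 # ng/m^3, example concentration
U_expanded = 2.0 * u_combined_rel * result    # coverage factor k = 2 (~95%)
print(f"{result} ± {U_expanded:.1f} ng/m^3 (k = 2)")
```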

  3. Linking trading ratio with TMDL (total maximum daily load) allocation matrix and uncertainty analysis.

    PubMed

    Zhang, H X

    2008-01-01

    An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, the setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set higher (e.g. greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to it. This paper presents a practical methodology for estimating an "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the tradeoffs between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. The rigorous quantification of the trading ratio will enhance the scientific basis, and thus public perception, for more informed decisions in an overall watershed-based pollutant trading program. (c) IWA Publishing 2008.

  4. Interim reliability-evaluation program: analysis of the Browns Ferry, Unit 1, nuclear plant. Appendix C - sequence quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mays, S.E.; Poloski, J.P.; Sullivan, W.H.

    1982-07-01

    This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix C generally describes the methods used to estimate accident sequence frequency values. Information is presented concerning the approach, example collection, failure data, candidate dominant sequences, uncertainty analysis, and sensitivity analysis.

  5. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions. Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU.) Optimization: What parameter values yield the best performing design or operating condition, given constraints? Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
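
    Dakota is driven by its own input decks and a generic simulation interface; purely as an illustration of the kind of ensemble study it automates, here is a plain-Python Latin hypercube parameter study against a stand-in model (the simulation function, parameter names, and ranges are invented and not Dakota syntax).

```python
import numpy as np
from scipy.stats import qmc   # Latin hypercube sampling (SciPy >= 1.7)

def simulation(x):
    """Hypothetical stand-in for a physics code run through a generic interface."""
    gap_conductance, inlet_temp = x
    return 600.0 + 0.8 * inlet_temp - 0.002 * gap_conductance  # fake peak temperature

# Latin hypercube design over two uncertain inputs.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=200)
lower, upper = [4000.0, 550.0], [8000.0, 580.0]     # gap conductance, inlet T
X = qmc.scale(unit_samples, lower, upper)

# Run the ensemble and summarize the output spread (what a UQ study reports).
y = np.array([simulation(x) for x in X])
print(f"mean = {y.mean():.1f}, std = {y.std(ddof=1):.1f}, "
      f"95% interval = [{np.percentile(y, 2.5):.1f}, {np.percentile(y, 97.5):.1f}]")
```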

  6. Comparisons of Wilks’ and Monte Carlo Methods in Response to the 10CFR50.46(c) Proposed Rulemaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hongbin; Szilard, Ronaldo; Zou, Ling

    The Nuclear Regulatory Commission (NRC) is proposing a new rulemaking on emergency core cooling system/loss-of-coolant accident (LOCA) performance analysis. In the proposed rulemaking, designated as 10CFR50.46(c), the US NRC put forward an equivalent cladding oxidation criterion as a function of cladding pre-transient hydrogen content. The proposed rulemaking imposes more restrictive and burnup-dependent cladding embrittlement criteria; consequently, nearly all the fuel rods in a reactor core need to be analyzed under LOCA conditions to demonstrate compliance with the safety limits. New analysis methods are required to provide a thorough characterization of the reactor core in order to identify the locations of the limiting rods as well as to quantify the safety margins under LOCA conditions. With the new analysis method presented in this work, the limiting transient case and the limiting rods can be easily identified to quantify the safety margins in response to the proposed new rulemaking. In this work, the best-estimate plus uncertainty (BEPU) analysis capability for large break LOCA with the new cladding embrittlement criteria using the RELAP5-3D code is established and demonstrated with a reduced set of uncertainty parameters. Both the direct Monte Carlo method and the Wilks’ nonparametric statistical method can be used to perform uncertainty quantification. Wilks’ method has become the de-facto industry standard for uncertainty quantification in BEPU LOCA analyses. Despite its widespread adoption by the industry, the use of small sample sizes to infer a statement of compliance with the existing 10CFR50.46 rule has been a major cause of unrealized operational margin in today’s BEPU methods. Moreover, the debate on the proper interpretation of the Wilks’ theorem in the context of safety analyses is not fully resolved yet, even more than two decades after its introduction in the frame of safety analyses in the nuclear industry. This represents both a regulatory and application risk in rolling out new methods. With the 10CFR50.46(c) proposed rulemaking, the deficiencies of the Wilks’ approach are further exacerbated. The direct Monte Carlo approach offers a robust alternative for uncertainty quantification within the context of BEPU analyses. In this work, the Monte Carlo method is compared with the Wilks’ method in response to the NRC 10CFR50.46(c) proposed rulemaking.
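
    A short sketch of the classical one-sided Wilks sample-size criterion alongside a brute-force Monte Carlo percentile estimate is given below; the response model is invented, and this is a generic illustration rather than the RELAP5-3D workflow.

```python
import math
import numpy as np

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest n such that the `order`-th largest of n random runs is a
    one-sided upper tolerance limit for the `coverage` quantile at the
    requested confidence (classical Wilks criterion)."""
    n = order
    while True:
        # P(at least `order` of n samples fall above the coverage quantile)
        p = sum(math.comb(n, j) * (1 - coverage) ** j * coverage ** (n - j)
                for j in range(order, n + 1))
        if p >= confidence:
            return n
        n += 1

print(wilks_sample_size(order=1), wilks_sample_size(order=2))   # -> 59, 93

# Comparison on an invented peak-clad-temperature response (Kelvin).
rng = np.random.default_rng(3)
pct = 1000.0 + 80.0 * rng.standard_normal(100_000)
print(f"direct MC 95th percentile: {np.percentile(pct, 95):.1f} K")
runs = 1000.0 + 80.0 * rng.standard_normal(59)
print(f"Wilks 95/95 bound (max of 59 runs): {runs.max():.1f} K")
```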

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
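
    The sketch below shows how Pearson, Spearman, and partial correlation coefficients can be computed from a sampled ensemble; the two inputs, the response model, and all numbers are invented and unrelated to the VERA-CS results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Invented ensemble: two uncertain inputs and one figure of merit (MDNBR-like).
n = 500
inlet_temp = rng.normal(565.0, 3.0, n)            # K
gap_cond = rng.normal(6000.0, 400.0, n)           # W/m^2-K
mdnbr = (2.1 - 0.03 * (inlet_temp - 565.0) + 1e-5 * (gap_cond - 6000.0)
         + rng.normal(0.0, 0.02, n))

X = np.column_stack([inlet_temp, gap_cond])
names = ["inlet_temp", "gap_cond"]

for i, name in enumerate(names):
    pearson = stats.pearsonr(X[:, i], mdnbr)[0]
    spearman = stats.spearmanr(X[:, i], mdnbr)[0]
    # Partial correlation: correlate the residuals after regressing the other
    # inputs out of both the parameter and the response.
    A = np.column_stack([np.delete(X, i, axis=1), np.ones(n)])
    res_x = X[:, i] - A @ np.linalg.lstsq(A, X[:, i], rcond=None)[0]
    res_y = mdnbr - A @ np.linalg.lstsq(A, mdnbr, rcond=None)[0]
    partial = stats.pearsonr(res_x, res_y)[0]
    print(f"{name:10s}  Pearson {pearson:+.2f}  Spearman {spearman:+.2f}  Partial {partial:+.2f}")
```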

  8. Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions

    DOE PAGES

    Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...

    2016-02-24

    Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
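
    As a generic illustration of the forward-error-propagation step mentioned above, the sketch below compares linear ("sandwich") propagation of a parameter covariance with direct Monte Carlo sampling; the observable and the LEC covariance are invented and have nothing to do with the actual χEFT analysis.

```python
import numpy as np

# Invented example: an observable y(c) depending on three low-energy constants.
def observable(c):
    return 1.2 * c[0] - 0.7 * c[1] ** 2 + 0.3 * c[0] * c[2]

c_opt = np.array([1.0, 0.5, -0.8])            # fitted LECs (made up)
cov_c = np.diag([0.02, 0.05, 0.04]) ** 2      # their covariance (made up)

# Jacobian by central finite differences (the paper uses automatic differentiation).
eps = 1e-6
J = np.array([(observable(c_opt + eps * e) - observable(c_opt - eps * e)) / (2 * eps)
              for e in np.eye(3)])

var_lin = J @ cov_c @ J                       # linear propagation J C J^T
print(f"statistical uncertainty (linear): {np.sqrt(var_lin):.4f}")

# Cross-check with Monte Carlo sampling of the parameters.
rng = np.random.default_rng(5)
samples = rng.multivariate_normal(c_opt, cov_c, size=20_000)
mc_std = np.array([observable(c) for c in samples]).std()
print(f"statistical uncertainty (MC):     {mc_std:.4f}")
```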

  9. Gaussian process surrogates for failure detection: A Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Wang, Hongqiao; Lin, Guang; Li, Jinglai

    2016-05-01

    An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
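
    A minimal scikit-learn sketch of the surrogate idea is given below, with a space-filling random design standing in for the paper's Bayesian experimental design; the limit-state function, input distribution, and sample sizes are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

def expensive_model(x):
    """Hypothetical limit-state function; 'failure' when the output exceeds 0."""
    return x[:, 0] ** 2 + np.sin(3.0 * x[:, 1]) - 1.5

# A modest design of 'expensive' runs (the paper chooses these points
# sequentially near the limit state; here they are just space-filling).
X_train = rng.uniform(-2.5, 2.5, size=(80, 2))
y_train = expensive_model(X_train)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Cheap Monte Carlo on the surrogate to estimate the failure probability
# for inputs distributed as independent standard normals.
X_mc = rng.standard_normal(size=(100_000, 2))
pf_surrogate = float(np.mean(gp.predict(X_mc) > 0.0))
pf_direct = float(np.mean(expensive_model(X_mc) > 0.0))  # feasible only for toy models
print(f"P(failure) surrogate: {pf_surrogate:.4f}, direct: {pf_direct:.4f}")
```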

  10. Computer Model Inversion and Uncertainty Quantification in the Geosciences

    NASA Astrophysics Data System (ADS)

    White, Jeremy T.

    The subject of this dissertation is the use of computer models as data analysis tools in several different geoscience settings, including integrated surface water/groundwater modeling, tephra fallout modeling, geophysical inversion, and hydrothermal groundwater modeling. The dissertation is organized into three chapters, which correspond to three individual publication manuscripts. In the first chapter, a linear framework is developed to identify and estimate the potential predictive consequences of using a simple computer model as a data analysis tool. The framework is applied to a complex integrated surface-water/groundwater numerical model with thousands of parameters. Several types of predictions are evaluated, including particle travel time and surface-water/groundwater exchange volume. The analysis suggests that model simplifications have the potential to corrupt many types of predictions. The implementation of the inversion, including how the objective function is formulated, what minimum objective function value is acceptable, and how expert knowledge is enforced on parameters, can greatly influence the manifestation of model simplification. Depending on the prediction, failure to specifically address each of these important issues during inversion is shown to degrade the reliability of some predictions. In some instances, inversion is shown to increase, rather than decrease, the uncertainty of a prediction, which defeats the purpose of using a model as a data analysis tool. In the second chapter, an efficient inversion and uncertainty quantification approach is applied to a computer model of volcanic tephra transport and deposition. The computer model simulates many physical processes related to tephra transport and fallout. The utility of the approach is demonstrated for two eruption events. In both cases, the importance of uncertainty quantification is highlighted by exposing the variability in the conditioning provided by the observations used for inversion. The worth of different types of tephra data to reduce parameter uncertainty is evaluated, as is the importance of different observation error models. The analyses reveal the importance of using tephra granulometry data for inversion, which results in reduced uncertainty for most eruption parameters. In the third chapter, geophysical inversion is combined with hydrothermal modeling to evaluate the enthalpy of an undeveloped geothermal resource in a pull-apart basin located in southeastern Armenia. A high-dimensional gravity inversion is used to define the depth to the contact between the lower-density valley fill sediments and the higher-density surrounding host rock. The inverted basin depth distribution was used to define the hydrostratigraphy for the coupled groundwater-flow and heat-transport model that simulates the circulation of hydrothermal fluids in the system. Evaluation of several different geothermal system configurations indicates that the most likely system configuration is a low-enthalpy, liquid-dominated geothermal system.

  11. Methods for regionalization of impacts of non-toxic air pollutants in life-cycle assessments often tell a consistent story

    NASA Astrophysics Data System (ADS)

    Djomo, S. Njakou; Knudsen, M. T.; Andersen, M. S.; Hermansen, J. E.

    2017-11-01

    There is an ongoing debate regarding the influence of the source location of pollution on the fate of pollutants and their subsequent impacts. Several methods have been developed to derive site-dependent characterization factors (CFs) for use in life-cycle assessment (LCA). Consistent, precise, and accurate estimates of CFs are crucial for establishing long-term, sustainable air pollution abatement policies. We reviewed currently available studies on the regionalization of non-toxic air pollutants in LCA. We also extracted and converted data into indices for analysis. We showed that CFs can distinguish between emissions occurring in different locations, and that the different methods used to derive CFs map locations consistently from very sensitive to less sensitive. Seasonal variations are less important for the computation of CFs for acidification and eutrophication, but they are relevant for the calculation of CFs for tropospheric ozone formation. Large intra-country differences in estimated CFs suggest that an abatement policy relying on quantitative estimates based upon a single method may have undesirable outcomes. Within-country differences in estimates of CFs for acidification and eutrophication are the result of the models used, category definitions, soil sensitivity factors, background emission concentrations, critical loads databases, and input data. Striking features in these studies were the lack of CFs for countries outside Europe, the USA, Japan, and Canada, and the lack of quantification of uncertainties. Parameter and input data uncertainties are well quantified, but the uncertainty associated with the choice of category indicator is rarely quantified, and this can be significant. Although CFs are scientifically robust, further refinements are needed before they can be integrated in LCA. Future research should include uncertainty analyses and should develop a consensus model for CFs. CFs for countries outside Europe, Japan, Canada and the USA are urgently needed.

  12. Improving the quantification of flash flood hydrographs and reducing their uncertainty using noncontact streamgauging methods

    NASA Astrophysics Data System (ADS)

    Branger, Flora; Dramais, Guillaume; Horner, Ivan; Le Boursicaud, Raphaël; Le Coz, Jérôme; Renard, Benjamin

    2015-04-01

    Continuous river discharge data are crucial for the study and management of floods. In most river discharge monitoring networks, these data are obtained at gauging stations, where the stage-discharge relation is modelled with a rating curve to derive discharge from the measurement of water level in the river. Rating curves are usually established using individual ratings (or gaugings). However, using traditional gauging methods during flash floods is challenging for many reasons, including hazardous flow conditions (for both equipment and people), the short duration of the flood events, transient flows during the time needed to perform the gauging, etc. The lack of gaugings implies that the rating curve is often extrapolated well beyond the gauged range for the highest floods, inducing large uncertainties in the computed discharges. We deployed two remote techniques for gauging floods and improving stage-discharge relations for high flow conditions at several hydrometric stations throughout the Ardèche river catchment in France: (1) permanent video-recording stations enabling the implementation of the image analysis LS-PIV technique (Large Scale Particle Image Velocimetry); and (2) mobile gaugings using handheld Surface Velocity Radars (SVR). These gaugings were used to estimate the rating curve and its uncertainty using the Bayesian method BaRatin (Le Coz et al., 2014). Importantly, this method explicitly accounts for the uncertainty of individual gaugings, which is especially relevant for remote gaugings since their uncertainty is generally much higher than that of standard intrusive gauging methods. Then, the uncertainty of streamflow records was derived by combining the uncertainty of the rating curve and the uncertainty of stage records. We assessed the impact of these methodological developments on peak flow estimation and on flood descriptors at various time steps. The combination of field measurement innovation and statistical developments allows the uncertainties of flood peak estimates and flood descriptors at gauging stations to be efficiently quantified and reduced. The noncontact streamgauging techniques used in our field campaign strategy are complementary. Permanent LS-PIV stations, once installed and calibrated, can monitor floods automatically and perform many gaugings during a single event, thus documenting the rise, peak and recession of floods. SVR gaugings are more "one shot" gaugings but can be deployed quickly and at minimal cost over a large territory. Both of these noncontact techniques contribute to a significant reduction of uncertainty on peak hydrographs and flood descriptors at different time steps for a given catchment. Le Coz, J.; Renard, B.; Bonnifait, L.; Branger, F. & Le Boursicaud, R. (2014), 'Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: A Bayesian approach', Journal of Hydrology 509, 573-587.
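
    As a rough sketch of the stage-discharge problem (ordinary weighted least squares rather than the BaRatin Bayesian method, with all gaugings invented), the example below fits a power-law rating curve Q = a(h - b)^c with heteroscedastic gauging uncertainties and propagates the parameter covariance to an extrapolated flood stage.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical gaugings: stage h (m) and discharge Q (m^3/s); the two highest
# points are imagined as noncontact (LS-PIV / radar) gaugings.
h = np.array([0.4, 0.6, 0.8, 1.0, 1.3, 1.8, 2.5, 3.2])
Q = np.array([0.6, 2.1, 4.6, 7.9, 14.6, 29.5, 58.0, 97.0])
sigma_Q = 0.07 * Q                  # ~7% relative uncertainty for standard gaugings
sigma_Q[-2:] = 0.15 * Q[-2:]        # remote gaugings are less precise

def rating(h, a, b, c):
    """Classical power-law rating curve Q = a (h - b)^c."""
    return a * np.clip(h - b, 1e-6, None) ** c

popt, pcov = curve_fit(rating, h, Q, p0=[10.0, 0.1, 1.8],
                       bounds=([0.1, -0.5, 0.5], [200.0, 0.35, 3.0]),
                       sigma=sigma_Q, absolute_sigma=True)

# Propagate the parameter covariance to a flood stage beyond the gauged range.
rng = np.random.default_rng(7)
params = rng.multivariate_normal(popt, pcov, size=20_000)
q_flood = np.array([rating(4.0, *p) for p in params])
print(f"Q(h = 4.0 m) ~ {q_flood.mean():.0f} m^3/s "
      f"(95% interval {np.percentile(q_flood, 2.5):.0f}-{np.percentile(q_flood, 97.5):.0f})")
```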

  13. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel-CO2.

    PubMed

    Coakley, Kevin J; Miller, John B; Montzka, Stephen A; Sweeney, Colm; Miller, Ben R

    2016-06-27

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget, which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of uncertainties and potentially increase the number of Cff estimates by approximately a factor of three. Provided that these estimates are of comparable quality to Δ14C-based estimates, we expect an improved determination of fossil fuel-CO2 emissions.

  14. Assessing the Risks to Human Health in Heterogeneous Aquifers under Uncertainty

    NASA Astrophysics Data System (ADS)

    de Barros, Felipe

    2015-04-01

    Reliable quantification of human health risk from toxic chemicals present in groundwater is a challenging task. The main difficulty lies in the fact that many of the components that constitute human health risk assessment are uncertain and require interdisciplinary knowledge. Understanding the impact of each of these components on risk estimation can provide guidance for decision makers to manage contaminated sites and best allocate resources towards minimal prediction uncertainty. This presentation will focus on the impact of aquifer heterogeneity on human health risk. Spatial heterogeneity of the hydrogeological properties can lead to the formation of preferential flow channels which control the plume spreading rates and travel time statistics, both of which are critical in assessing the risk level. By making use of an integrated hydrogeological-health stochastic framework, the significance of characteristic length scales (e.g. those characterizing flow, transport and sampling devices) in both controlling the uncertainty of health risk and determining data needs is highlighted. Through a series of examples, we show how fundamental knowledge of the main physical mechanisms affecting solute pathways is necessary to understand the human health response to varying drivers.

  15. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
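
    The sketch below shows only the sparse-recovery step such methods build on, an l1 (Lasso) fit of a Hermite expansion from noisy samples; it does not implement the paper's iterative rotations, and the target function, degree, and penalty are invented.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)

# Quantity of interest with a sparse probabilists'-Hermite expansion:
# f = 2 He_2(x) - 0.5 He_5(x), with x ~ N(0, 1).
def f(x):
    return 2.0 * hermeval(x, [0, 0, 1]) - 0.5 * hermeval(x, [0, 0, 0, 0, 0, 1])

degree, n_samples = 12, 60
x = rng.standard_normal(n_samples)
y = f(x) + rng.normal(0.0, 0.05, n_samples)        # noisy evaluations

# Design matrix of He_0 ... He_degree, normalized to be orthonormal
# under the Gaussian measure.
Phi = np.column_stack([hermeval(x, np.eye(degree + 1)[k]) / math.sqrt(math.factorial(k))
                       for k in range(degree + 1)])

# Sparsity-promoting (l1) regression keeps only the influential coefficients.
lasso = Lasso(alpha=0.02, fit_intercept=False, max_iter=50_000).fit(Phi, y)
active = np.flatnonzero(np.abs(lasso.coef_) > 0.05)
print("active Hermite indices:", active)
print("their coefficients (orthonormal basis):", np.round(lasso.coef_[active], 2))
# true sparse pattern: indices {2, 5} with coefficients 2*sqrt(2!) and -0.5*sqrt(5!)
```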

  16. Molecular nonlinear dynamics and protein thermal uncertainty quantification

    PubMed Central

    Xia, Kelin; Wei, Guo-Wei

    2014-01-01

    This work introduces molecular nonlinear dynamics (MND) as a new approach for describing protein folding and aggregation. By using a mode system, we show that the MND of disordered proteins is chaotic, while that of folded proteins exhibits intrinsically low dimensional manifolds (ILDMs). The stability of ILDMs is found to strongly correlate with protein energies. We propose a novel method for protein thermal uncertainty quantification based on persistently invariant ILDMs. Extensive comparison with experimental data and state-of-the-art methods in the field validates the proposed new method for protein B-factor prediction. PMID:24697365

  17. Validation of Ion Chromatographic Method for Determination of Standard Inorganic Anions in Treated and Untreated Drinking Water

    NASA Astrophysics Data System (ADS)

    Ivanova, V.; Surleva, A.; Koleva, B.

    2018-06-01

    An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated 850 IC Professional, Metrohm system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. The validation of the method was performed for simultaneous determination of all studied analytes, and the results showed that the validated method meets the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was estimated by analysis of a certified reference material for soft drinking water. A recovery test was performed on spiked drinking water samples. The measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
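
    The limits of detection and quantification mentioned above are commonly estimated from a calibration curve; the sketch below uses the widespread 3.3·s/slope and 10·s/slope conventions on invented chloride calibration data (not the paper's measurements).

```python
import numpy as np

# Hypothetical calibration: standard concentrations (mg/L) and peak areas.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([0.52, 1.03, 2.08, 5.11, 10.30, 20.60])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s_res = residuals.std(ddof=2)                 # residual standard deviation

# Calibration-based estimates (ICH-style conventions).
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
print(f"LOD ~ {lod:.2f} mg/L, LOQ ~ {loq:.2f} mg/L")
```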

  18. Quantification of CO emissions from the city of Madrid using MOPITT satellite retrievals and WRF simulations

    NASA Astrophysics Data System (ADS)

    Dekker, Iris N.; Houweling, Sander; Aben, Ilse; Röckmann, Thomas; Krol, Maarten; Martínez-Alonso, Sara; Deeter, Merritt N.; Worden, Helen M.

    2017-12-01

    The growth of mega-cities leads to air quality problems directly affecting the citizens. Satellite measurements are increasing in quality and quantity, which leads to more accurate satellite retrievals of enhanced air pollutant concentrations over large cities. In this paper, we compare and discuss both an existing and a new method for estimating urban-scale trends in CO emissions using multi-year retrievals from the MOPITT satellite instrument. The first method is mainly based on satellite data, and has the advantage of fewer assumptions, but also comes with uncertainties and limitations as shown in this paper. To improve the reliability of urban-to-regional scale emission trend estimation, we simulate MOPITT retrievals using the Weather Research and Forecasting model with chemistry (WRF-Chem). The difference between model and retrieval is used to optimize CO emissions in WRF-Chem, focusing on the city of Madrid, Spain. This method has the advantage over the existing method in that it allows both a trend analysis of CO concentrations and a quantification of CO emissions. Our analysis confirms that MOPITT is capable of detecting CO enhancements over Madrid, although significant differences remain between the yearly averaged model output and satellite measurements (R2 = 0.75) over the city. After optimization, we find Madrid CO emissions to be lower by 48 % for 2002 and by 17 % for 2006 compared with the EdgarV4.2 emission inventory. The MOPITT-derived emission adjustments lead to better agreement with the European emission inventory TNO-MACC-III for both years. This suggests that the downward trend in CO emissions over Madrid is overestimated in EdgarV4.2 and more realistically represented in TNO-MACC-III. However, our satellite- and model-based emission estimates have large uncertainties, around 20 % for 2002 and 50 % for 2006.

  19. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of O(n^{-1/2}), the corresponding IRUQ converges at O(n^{-1}). IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
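
    A minimal from-scratch sliced inverse regression on a synthetic problem is sketched below; it illustrates only the SDR-subspace estimation step (the response surface and the rest of the IRUQ procedure are not shown), and the dimensionality, link function, and slice count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(9)

# High-dimensional input, but the QoI depends on a single hidden direction.
d, n = 50, 4000
true_dir = np.zeros(d)
true_dir[:3] = [0.6, -0.7, 0.4]
true_dir /= np.linalg.norm(true_dir)
X = rng.standard_normal((n, d))
y = (X @ true_dir) ** 3 + 0.1 * rng.standard_normal(n)

# Sliced inverse regression: slice the response, average the whitened inputs
# within each slice, and eigendecompose the covariance of those slice means.
Xc = X - X.mean(axis=0)
L = np.linalg.cholesky(np.cov(Xc, rowvar=False))
Z = np.linalg.solve(L, Xc.T).T                       # whitened inputs

n_slices = 20
slices = np.array_split(np.argsort(y), n_slices)
means = np.array([Z[idx].mean(axis=0) for idx in slices])
weights = np.array([len(idx) / n for idx in slices])
M = (means.T * weights) @ means                      # weighted slice-mean covariance

eigval, eigvec = np.linalg.eigh(M)
sdr_dir = np.linalg.solve(L.T, eigvec[:, -1])        # back to original coordinates
sdr_dir /= np.linalg.norm(sdr_dir)
print("alignment with the true direction:", abs(sdr_dir @ true_dir))
```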

  20. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  1. Developing population models with data from marked individuals

    USGS Publications Warehouse

    Hae Yeong Ryu; Kevin T. Shoemaker; Eva Kneip; Anna Pidgeon; Patricia Heglund; Brooke Bateman; Thogmartin, Wayne E.; Reşit Akçakaya

    2016-01-01

    Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
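
    For a sense of what the resulting fully specified model looks like, the sketch below projects a toy stochastic, density-dependent two-stage matrix model forward in time; all vital rates, their temporal variabilities, the density-dependence form, and the quasi-extinction threshold are invented, and the mark-recapture estimation and bias corrections described above are not shown.

```python
import numpy as np

rng = np.random.default_rng(10)

# Invented two-stage (juvenile / adult) female-based model.
mean_fec, sd_fec = 1.4, 0.3      # juveniles produced per adult per year
mean_sj, sd_sj = 0.35, 0.05      # juvenile survival
mean_sa, sd_sa = 0.70, 0.05      # adult survival
K = 500.0                        # density-dependence scale on fecundity

def project(n0, years=50, n_reps=1000):
    finals = np.empty(n_reps)
    for r in range(n_reps):
        n = np.array(n0, dtype=float)
        for _ in range(years):
            # Draw this year's vital rates (environmental stochasticity).
            fec = max(rng.normal(mean_fec, sd_fec), 0.0) * np.exp(-n.sum() / K)
            sj = np.clip(rng.normal(mean_sj, sd_sj), 0.0, 1.0)
            sa = np.clip(rng.normal(mean_sa, sd_sa), 0.0, 1.0)
            A = np.array([[0.0, fec],
                          [sj,  sa]])
            n = A @ n
        finals[r] = n.sum()
    return finals

finals = project([50.0, 100.0])
print(f"median population after 50 yr: {np.median(finals):.0f}; "
      f"quasi-extinction risk (< 20 females): {np.mean(finals < 20.0):.3f}")
```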

  2. The U/Th production ratio and the age of the Milky Way from meteorites and Galactic halo stars.

    PubMed

    Dauphas, Nicolas

    2005-06-30

    Some heavy elements (with mass number A > 69) are produced by the 'rapid' (r)-process of nucleosynthesis, where lighter elements are bombarded with a massive flux of neutrons. Although this is characteristic of supernovae and neutron star mergers, uncertainties in where the r-process occurs persist because stellar models are too crude to allow precise quantification of this phenomenon. As a result, there are many uncertainties and assumptions in the models used to calculate the production ratios of actinides (like uranium-238 and thorium-232). Current estimates of the U/Th production ratio range from approximately 0.4 to 0.7. Here I show that the U/Th abundance ratio in meteorites can be used, in conjunction with observations of low-metallicity stars in the halo of the Milky Way, to determine the U/Th production ratio very precisely (0.57 +0.037/-0.031). This value can be used in future studies to constrain the possible nuclear mass formulae used in r-process calculations, to help determine the source of Galactic cosmic rays, and to date circumstellar grains. I also estimate the age of the Milky Way (14.5 +2.8/-2.2 Gyr) in a way that is independent of the uncertainties associated with fluctuations in the microwave background or models of stellar evolution.
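
    As a back-of-the-envelope illustration of the chronometer behind such estimates, the sketch below applies closed-system decay of the U/Th ratio from an assumed production ratio; the "observed" ratio used here is hypothetical, and the paper's actual Galaxy-age estimate rests on a more detailed enrichment model.

```python
import math

# Decay constants (1/Gyr) from the half-lives of 238U (4.468 Gyr)
# and 232Th (14.05 Gyr).
lam_u = math.log(2.0) / 4.468
lam_th = math.log(2.0) / 14.05

ratio_production = 0.57     # production ratio in the range derived above
ratio_observed = 0.16       # hypothetical U/Th in an old r-process-enriched star

# Closed-system decay: (U/Th)_obs = (U/Th)_prod * exp(-(lam_u - lam_th) * t)
age = math.log(ratio_production / ratio_observed) / (lam_u - lam_th)
print(f"r-process age ~ {age:.1f} Gyr")
```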

  3. On the Utilization of Ice Flow Models and Uncertainty Quantification to Interpret the Impact of Surface Radiation Budget Errors on Estimates of Greenland Ice Sheet Surface Mass Balance and Regional Estimates of Mass Balance

    NASA Astrophysics Data System (ADS)

    Schlegel, N.; Larour, E. Y.; Gardner, A. S.; Lang, C.; Miller, C. E.; van den Broeke, M. R.

    2016-12-01

    How Greenland ice flow may respond to future increases in surface runoff and to increases in the frequency of extreme melt events is unclear, as it requires detailed comprehension of Greenland surface climate and the ice sheet's sensitivity to associated uncertainties. With established uncertainty quantification tools run within the framework of the Ice Sheet System Model (ISSM), we conduct decadal-scale forward modeling experiments to 1) quantify the spatial resolution needed to effectively force distinct components of the surface radiation budget, and subsequently surface mass balance (SMB), in various regions of the ice sheet and 2) determine the dynamic response of Greenland ice flow to variations in components of the net radiation budget. The Glacier Energy and Mass Balance (GEMB) software is a column surface model (1-D) that has recently been embedded as a module within ISSM. Using the ISSM-GEMB framework, we perform sensitivity analyses to determine how perturbations in various components of the surface radiation budget affect model output; these model experiments allow us to predict where and on what spatial scale the ice sheet is likely to dynamically respond to changes in these parameters. Preliminary results suggest that SMB should be forced at a resolution of at least 23 km to properly capture the dynamic ice response. In addition, Monte-Carlo style sampling analyses reveal that the areas with the largest uncertainty in mass flux are located near the equilibrium line altitude (ELA), upstream of major outlet glaciers in the north and west of the ice sheet. Sensitivity analysis indicates that these areas are also the most vulnerable on the ice sheet to persistent, far-field shifts in SMB, suggesting that continued warming, and an upstream shift in the ELA, are likely to result in increased velocities and, consequently, SMB-induced thinning upstream of major outlet glaciers. Here, we extend our investigation to consider various components of the surface radiation budget separately, in order to determine how and where errors in these fields may independently impact ice flow. This work was performed at the California Institute of Technology's Jet Propulsion Laboratory under a contract with the National Aeronautics and Space Administration's Cryosphere and Interdisciplinary Research in Earth Science Programs.

  4. Uncertainty Quantification For Physical and Numerical Diffusion Models In Inertial Confinement Fusion Simulations

    NASA Astrophysics Data System (ADS)

    Rana, Verinder S.

    This thesis concerns simulations of Inertial Confinement Fusion (ICF). Inertial confinement experiments are carried out at a large scale facility, the National Ignition Facility. The experiments have failed to reproduce design calculations, and so uncertainty quantification of the calculations is an important asset. Uncertainties can be classified as aleatoric or epistemic. This thesis is concerned with aleatoric uncertainty quantification. Among the many uncertain aspects that affect the simulations, we have narrowed our study to four sources of uncertainty. The first source of uncertainty we present is the amount of pre-heating of the fuel done by hot electrons. The second source of uncertainty we consider is the effect of algorithmic and physical transport diffusion on the hot spot thermodynamics. Physical transport mechanisms play an important role for the entire duration of the ICF implosion, so modeling them correctly becomes extremely vital. In addition, codes that simulate material mixing introduce numerically (algorithmically) generated transport across the material interfaces. This adds another layer of uncertainty in the solution through the artificially added diffusion. The third source of uncertainty we consider is physical model uncertainty. The fourth source of uncertainty we focus on is a single localized surface perturbation (a divot), which creates a perturbation to the solution that can potentially enter the hot spot and diminish the thermonuclear environment. Jets of ablator material are hypothesized to enter the hot spot and cool the core, contributing to reaction levels lower than predicted. A plasma transport package, Transport for Inertial Confinement Fusion (TICF), has been implemented in the radiation hydrodynamics code FLASH from the University of Chicago. TICF has thermal, viscous and mass diffusion models that span the entire ICF implosion regime. We introduced a Quantum Molecular Dynamics-calibrated thermal conduction model due to Hu for thermal transport. Numerical approximation uncertainties are introduced by the choice of a hydrodynamic solver for a particular flow. Solvers tend to be diffusive at material interfaces, and the Front Tracking (FT) algorithm, which is available as a software library (API), helps to ameliorate such effects. The FT algorithm has also been implemented in FLASH, and we use it to study the effect that divots can have on hot spot properties.

  5. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by its prior weight and its marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE works by gradually searching the parameter space from regions of low likelihood toward regions of high likelihood, an evolution carried out iteratively through a local sampling procedure. The efficiency of NSE is therefore dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling, but M-H is not an efficient sampler for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, we incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling step. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly, although both the improved and the original NSE suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is reduced by using adaptive sparse grid surrogates.
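
    The core bookkeeping of the nested sampling estimator can be illustrated with a short, self-contained sketch. The example below is a hedged illustration on a toy two-dimensional Gaussian likelihood with a uniform prior; the local sampling step is a plain random-walk move standing in for the DREAMzs sampler used in the study, and all settings (number of live points, iterations, step size) are illustrative choices rather than values from the paper.

      # Minimal nested-sampling estimate of the marginal likelihood (evidence)
      # for a toy 2-D Gaussian likelihood with a uniform prior on [-5, 5]^2.
      import numpy as np
      from scipy.special import logsumexp

      rng = np.random.default_rng(0)
      ndim, nlive, niter = 2, 200, 2000
      lo, hi = -5.0, 5.0

      def log_like(theta):
          return -0.5 * np.sum(theta**2) - ndim * 0.5 * np.log(2 * np.pi)

      # Initialise the live points from the prior.
      live = rng.uniform(lo, hi, size=(nlive, ndim))
      live_logL = np.array([log_like(t) for t in live])

      logZ, logX_prev = -np.inf, 0.0
      for i in range(1, niter + 1):
          worst = np.argmin(live_logL)            # lowest-likelihood live point
          logL_star = live_logL[worst]
          logX = -i / nlive                       # expected log prior volume
          logw = np.log(np.exp(logX_prev) - np.exp(logX))
          logZ = np.logaddexp(logZ, logw + logL_star)
          logX_prev = logX

          # Replace the worst point by a prior sample with L > L_star, found by a
          # short random walk started from another randomly chosen live point.
          start = rng.choice([j for j in range(nlive) if j != worst])
          theta = live[start].copy()
          for _ in range(20):
              prop = np.clip(theta + 0.5 * rng.standard_normal(ndim), lo, hi)
              if log_like(prop) > logL_star:
                  theta = prop
          live[worst], live_logL[worst] = theta, log_like(theta)

      # Add the contribution of the remaining live points.  Nearly all of the
      # Gaussian mass lies inside the box, so the analytic evidence is ~1/prior volume.
      logZ = np.logaddexp(logZ, logX_prev - np.log(nlive) + logsumexp(live_logL))
      print("log-evidence estimate:", logZ, " analytic:", -np.log((hi - lo) ** ndim))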

  6. Detectability and Interpretational Uncertainties: Considerations in Gauging the Impacts of Land Disturbance on Streamflow

    EPA Science Inventory

    Hydrologic impacts of land disturbance and management can be confounded by rainfall variability. As a consequence, attempts to gauge and quantify these effects through streamflow monitoring are typically subject to uncertainties. This paper addresses the quantification and deline...

  7. Performance of uncertainty quantification methodologies and linear solvers in cardiovascular simulations

    NASA Astrophysics Data System (ADS)

    Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison

    2017-11-01

    Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
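
    As a small illustration of why the choice of iterative solver and preconditioner matters for the many repeated solves that UQ demands, the hedged sketch below compares unpreconditioned GMRES with ILU-preconditioned GMRES on a sparse 2-D Poisson system in SciPy. The Poisson matrix is only a stand-in for the much harder linearized Navier-Stokes systems discussed in the abstract, and the solver settings are illustrative, not those of the authors' flow solver.

      # Compare GMRES with and without an incomplete-LU preconditioner on a
      # sparse 2-D Laplacian, counting iterations to (default) convergence.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      n = 64                                            # grid points per side
      I = sp.identity(n)
      T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()       # 2-D Laplacian, size n^2
      b = np.ones(A.shape[0])

      def run(M=None, label=""):
          it = [0]
          def cb(_): it[0] += 1                         # count inner iterations
          x, info = spla.gmres(A, b, M=M, restart=50, maxiter=5000, callback=cb)
          res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
          print(f"{label}: iterations = {it[0]}, relative residual = {res:.2e}")

      run(label="no preconditioner")

      # Incomplete-LU factorization wrapped as a LinearOperator preconditioner.
      ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)
      M = spla.LinearOperator(A.shape, matvec=ilu.solve)
      run(M=M, label="ILU-preconditioned")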

  8. Constellation Program Lessons Learned in the Quantification and Use of Aerodynamic Uncertainty

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Hemsch, Michael J.; Pinier, Jeremy T.; Bibb, Karen L.; Chan, David T.; Hanke, Jeremy L.

    2011-01-01

    The NASA Constellation Program has worked for the past five years to develop a replacement for the current Space Transportation System. Of the elements that form the Constellation Program, only two require databases that define aerodynamic environments and their respective uncertainty: the Ares launch vehicles and the Orion crew and launch abort vehicles. Teams were established within the Ares and Orion projects to provide representative aerodynamic models including both baseline values and quantified uncertainties. A technical team was also formed within the Constellation Program to facilitate integration among the project elements. This paper is a summary of the collective experience of the three teams working with the quantification and use of uncertainty in aerodynamic environments: the Ares and Orion project teams as well as the Constellation integration team. Not all of the lessons learned discussed in this paper could be applied during the course of the program, but they are included in the hope of benefiting future projects.

  9. Measurements of VOC fluxes by Eddy-covariance with a PTR-Qi-TOF-MS over a mature wheat crop near Paris: Evaluation of data quality and uncertainties.

    NASA Astrophysics Data System (ADS)

    Buysse, Pauline; Loubet, Benjamin; Ciuraru, Raluca; Lafouge, Florence; Zurfluh, Olivier; Gonzaga-Gomez, Lais; Fanucci, Olivier; Gueudet, Jean-Christophe; Decuq, Céline; Gros, Valérie; Sarda, Roland; Zannoni, Nora

    2017-04-01

    The quantification of volatile organic compound (VOC) fluxes exchanged by terrestrial ecosystems is of great interest because of their influence on the chemistry and composition of the atmosphere, including aerosols and oxidants. The latest developments in techniques for detecting, identifying and measuring VOC fluxes have considerably improved our ability to obtain reliable estimates. Among these, the eddy-covariance (EC) methodology constitutes the most direct approach, and relies both on well-established principles (Aubinet et al. 2000) and on a continuously improving body of worldwide experience. The combination of the EC methodology with the latest proton-transfer-reaction mass spectrometer (PTR-MS) device, the PTR-Qi-TOF-MS, which allows the identification and quantification of more than 500 VOCs at high frequency, now provides a very powerful and precise tool for accurate quantification of VOC fluxes over various types of terrestrial ecosystems. The complexity of the whole methodology, however, demands that several data quality requirements be fulfilled. VOC fluxes were measured by EC with a PTR-Qi-TOF-MS (a national instrument within the ANAEE-France framework) for a month and a half over a mature wheat crop near Paris (FR-GRI ICOS site). The largest emissions (in descending order) were observed for detected compounds with mass-over-charge (m/z) ratios of 33.033 (methanol), 45.033 (acetaldehyde), 93.033 (not yet identified), 59.049 (acetone), and 63.026 (dimethyl sulfide, DMS). Emissions from higher-mass compounds, which might be due to pesticide applications at the beginning of our observation period, were also detected. Some compounds were seen to deposit (e.g. m/z 47.013, 71.085, 75.044, 83.05) while others exhibited bidirectional fluxes (e.g. m/z 57.07, 69.07). Before analyzing VOC flux responses to meteorological and crop development drivers, a data quality check was performed which included (i) uncertainty analysis of the mass and concentration calibration, (ii) determination of fragmentation patterns, (iii) determination of lag times and high-frequency losses for all ions that showed a flux, and (iv) determination of the flux random uncertainties and of the limit of detection.
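
    The central computation of the EC method, the covariance between vertical-wind and concentration fluctuations after correcting for the sampling-line lag time, can be sketched in a few lines. The example below runs on synthetic 10 Hz data with an invented lag and noise level; it illustrates the lag-time search and a crude random-flux uncertainty, not the actual ANAEE-France processing chain.

      # Eddy-covariance flux sketch on synthetic 10 Hz data: the VOC flux is the
      # covariance of vertical-wind and concentration fluctuations after shifting
      # the concentration series by the lag that maximises that covariance.
      import numpy as np

      rng = np.random.default_rng(1)
      fs, minutes = 10, 30                     # 10 Hz sampling, 30-min averaging block
      n = fs * 60 * minutes
      true_lag = 8                             # samples of tubing delay (hypothetical)

      w = rng.standard_normal(n)               # vertical wind fluctuations (m s-1)
      c_signal = 0.05 * w + 0.2 * rng.standard_normal(n)   # correlated VOC signal
      c = np.roll(c_signal, true_lag)          # concentration arrives delayed (ppb)

      def cov_at_lag(w, c, lag):
          """Covariance of w'(t) with c'(t + lag)."""
          ws, cs = (w[:-lag], c[lag:]) if lag > 0 else (w, c)
          return np.mean((ws - ws.mean()) * (cs - cs.mean()))

      # Lag-time determination: scan plausible lags, keep the one maximising |cov|.
      lags = np.arange(0, 30)
      covs = np.array([cov_at_lag(w, c, int(l)) for l in lags])
      best = lags[np.argmax(np.abs(covs))]
      flux = covs[np.argmax(np.abs(covs))]     # kinematic flux, w'c'

      # Crude random-flux uncertainty: spread of the covariance far from the true lag.
      noise = np.std([cov_at_lag(w, c, int(l)) for l in range(100, 200)])
      print(f"lag = {best} samples, w'c' flux = {flux:.4f}, random uncertainty ~ {noise:.4f}")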

  10. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    PubMed

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and the important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio of 1.5 with a 95% simulation interval of 0.4 to 8.9, which is 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretation. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, thus elevating the discussion of study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of the sources of uncertainty.
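
    The draw-and-adjust loop described above can be sketched compactly. The example below is a hedged illustration of a probabilistic bias analysis for a single multiplicative bias parameter acting on a hazard ratio; the bias distribution and the resulting numbers are invented for illustration and are not the Agricultural Health Study values.

      # Probabilistic bias analysis sketch: repeatedly draw a bias factor from an
      # assigned distribution, adjust the conventional estimate, and summarise the
      # resulting frequency distribution with a median and 95% simulation interval.
      import numpy as np

      rng = np.random.default_rng(2)
      n_iter = 100_000

      hr_conv, ci_low, ci_high = 2.6, 0.7, 9.4     # conventional estimate and 95% CI
      se_log = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

      # Bias parameter: multiplicative confounding/misclassification factor on the
      # hazard ratio, assigned a lognormal distribution centred above 1 (i.e. bias
      # away from the null).  This assignment is purely illustrative.
      bias = rng.lognormal(mean=np.log(1.7), sigma=0.4, size=n_iter)

      # Each iteration: draw a conventional estimate (random error) and divide out
      # the drawn bias to obtain a bias-adjusted estimate.
      hr_random = np.exp(np.log(hr_conv) + se_log * rng.standard_normal(n_iter))
      hr_adjusted = hr_random / bias

      median = np.median(hr_adjusted)
      lo, hi = np.percentile(hr_adjusted, [2.5, 97.5])
      print(f"bias-adjusted HR: median {median:.1f}, 95% simulation interval ({lo:.1f}, {hi:.1f})")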

  11. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often differ markedly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the questions of which of the provided kinematic source inversion solutions is the most reliable and robust, and — more generally — how accurate fault parameterizations and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based on a) a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and b) the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over point at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.

  12. Study the effect of reservoir spatial heterogeneity on CO2 sequestration under an uncertainty quantification (UQ) software framework

    NASA Astrophysics Data System (ADS)

    Fang, Y.; Hou, J.; Engel, D.; Lin, G.; Yin, J.; Han, B.; Fang, Z.; Fountoulakis, V.

    2011-12-01

    In this study, we introduce an uncertainty quantification (UQ) software framework for carbon sequestration, with a focus on the effect of spatial heterogeneity of reservoir properties on CO2 migration. We use a sequential Gaussian simulation method (SGSIM) to generate realizations of permeability fields with various spatial statistical attributes. To deal with the computational difficulties, we integrate the following ideas and approaches: 1) we use three different sampling approaches (probabilistic collocation, quasi-Monte Carlo, and adaptive sampling) to reduce the number of required forward calculations while still exploring the parameter space and quantifying the input uncertainty; 2) we use eSTOMP as the forward modeling simulator. eSTOMP is implemented using the Global Arrays toolkit (GA), which is based on one-sided inter-processor communication and supports a shared-memory programming style on distributed-memory platforms, providing highly scalable performance. It uses a data model that partitions most of the large-scale data structures into a relatively small number of distinct classes, and the lower-level simulator infrastructure (e.g. meshing support, associated data structures, and data mapping to processors) is separated from the higher-level physics and chemistry algorithmic routines through a grid component interface; and 3) beyond the faster model and more efficient algorithms used to speed up the forward calculation, we built an adaptive system infrastructure to select the best possible data transfer mechanisms, to allocate system resources optimally for performance, and to integrate the software packages and data needed to compose carbon sequestration simulation, computation, analysis, estimation and visualization. We will demonstrate the framework with a given CO2 injection scenario in a heterogeneous sandstone reservoir.

  13. Inverse regression-based uncertainty quantification algorithms for high-dimensional models: Theory and practice

    NASA Astrophysics Data System (ADS)

    Li, Weixuan; Lin, Guang; Li, Bing

    2016-09-01

    Many uncertainty quantification (UQ) approaches suffer from the curse of dimensionality, that is, their computational costs become intractable for problems involving a large number of uncertain parameters. In these situations, classic Monte Carlo (MC) often remains the method of choice because its convergence rate, O(n^(-1/2)), where n is the required number of model simulations, does not depend on the dimension of the problem. However, many high-dimensional UQ problems are intrinsically low-dimensional, because the variation of the quantity of interest (QoI) is often caused by only a few latent parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace in the statistics literature. Motivated by this observation, we propose two inverse regression-based UQ algorithms (IRUQ) for high-dimensional problems. Both algorithms use inverse regression to convert the original high-dimensional problem to a low-dimensional one, which is then efficiently solved by building a response surface for the reduced model, for example via polynomial chaos expansion. The first algorithm, intended for situations where an exact SDR subspace exists, is proved to converge at rate O(n^(-1)), hence much faster than MC. The second algorithm, which does not require an exact SDR subspace, employs the reduced model as a control variate to reduce the error of the MC estimate. The accuracy gain can still be significant, depending on how well the reduced model approximates the original high-dimensional one. IRUQ also provides several practical advantages: it is non-intrusive; it does not require computing the high-dimensional gradient of the QoI; and it reports an error bar so the user knows how reliable the result is.
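
    The control-variate idea behind the second algorithm can be illustrated with a short sketch. In the example below, a crude linear surrogate with a known mean stands in for the inverse-regression-based reduced model; the "expensive" model, the surrogate, and all settings are invented for illustration, and only the variance-reduction mechanism mirrors the abstract.

      # Control-variate Monte Carlo: a cheap reduced model g(x) with known mean is
      # used to reduce the variance of the MC estimate of E[f(x)] for a
      # high-dimensional model f.
      import numpy as np

      rng = np.random.default_rng(3)
      d, n = 50, 2000                            # 50 uncertain inputs, 2000 samples

      w = np.exp(-np.arange(d) / 3.0)            # QoI mostly driven by a few inputs
      def f(x):                                  # "expensive" high-dimensional model
          return np.sin(x @ w) + 0.1 * np.cos(x[:, 0] * x[:, 1])

      def g(x):                                  # cheap reduced model (surrogate)
          return x @ w                           # exact mean is 0 for standard normals

      x = rng.standard_normal((n, d))
      fx, gx = f(x), g(x)

      mc_plain = fx.mean()
      beta = np.cov(fx, gx)[0, 1] / np.var(gx)   # optimal control-variate coefficient
      mc_cv = (fx - beta * (gx - 0.0)).mean()    # 0.0 = known E[g]

      print(f"plain MC estimate   : {mc_plain:.4f} "
            f"(estimator std ~ {fx.std() / np.sqrt(n):.4f})")
      print(f"control-variate est.: {mc_cv:.4f} "
            f"(estimator std ~ {(fx - beta * gx).std() / np.sqrt(n):.4f})")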

  14. Uncertainty quantification in flux balance analysis of spatially lumped and distributed models of neuron-astrocyte metabolism.

    PubMed

    Calvetti, Daniela; Cheng, Yougan; Somersalo, Erkki

    2016-12-01

    Identifying feasible steady state solutions of a brain energy metabolism model is an inverse problem that allows infinitely many solutions. The characterization of the non-uniqueness, or the uncertainty quantification of the flux balance analysis, is tantamount to identifying the degrees of freedom of the solution. The degrees of freedom of multi-compartment mathematical models for energy metabolism of a neuron-astrocyte complex may offer a key to understand the different ways in which the energetic needs of the brain are met. In this paper we study the uncertainty in the solution, using techniques of linear algebra to identify the degrees of freedom in a lumped model, and Markov chain Monte Carlo methods in its extension to a spatially distributed case. The interpretation of the degrees of freedom in metabolic terms, more specifically, glucose and oxygen partitioning, is then leveraged to derive constraints on the free parameters to guarantee that the model is energetically feasible. We demonstrate how the model can be used to estimate the stoichiometric energy needs of the cells as well as the household energy based on the measured oxidative cerebral metabolic rate of glucose and glutamate cycling. Moreover, our analysis shows that in the lumped model the net direction of lactate dehydrogenase (LDH) in the cells can be deduced from the glucose partitioning between the compartments. The extension of the lumped model to a spatially distributed multi-compartment setting that includes diffusion fluxes from capillary to tissue increases the number of degrees of freedom, requiring the use of statistical sampling techniques. The analysis of the distributed model reveals that some of the conclusions valid for the spatially lumped model, e.g., concerning the LDH activity and glucose partitioning, may no longer hold.
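
    The linear-algebra step for the lumped model, identifying the degrees of freedom of the steady-state flux vector, amounts to computing the null space of the stoichiometric matrix. The sketch below does this for a hypothetical three-metabolite toy network; it only illustrates the technique and is unrelated to the actual neuron-astrocyte metabolism model.

      # At steady state the stoichiometric matrix S satisfies S @ v = 0, so the
      # dimension of its null space is the number of degrees of freedom of the
      # flux vector v.
      import numpy as np
      from scipy.linalg import null_space

      # Rows: internal metabolites; columns: reactions (including exchange fluxes).
      S = np.array([
          [ 1, -1, -1,  0,  0],   # metabolite A: produced by r1, consumed by r2, r3
          [ 0,  1,  0, -1,  0],   # metabolite B: produced by r2, consumed by r4
          [ 0,  0,  1,  0, -1],   # metabolite C: produced by r3, consumed by r5
      ])

      N = null_space(S)                    # orthonormal basis of {v : S v = 0}
      print("degrees of freedom:", N.shape[1])

      # Any steady-state flux distribution is a combination of these basis vectors;
      # additional constraints (non-negativity, measured uptake rates) then restrict
      # the admissible combinations, as in flux balance analysis.
      v = N @ np.array([1.0, 0.5])         # one admissible steady-state flux vector
      print("residual ||S v|| =", np.linalg.norm(S @ v))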

  15. Quantification of sewer system infiltration using delta(18)O hydrograph separation.

    PubMed

    Prigiobbe, V; Giulianelli, M

    2009-01-01

    The infiltration of parasitical water into two sewer systems in Rome (Italy) was quantified during a dry weather period. Infiltration was estimated using the hydrograph separation method with two water components and delta(18)O as a conservative tracer. The two water components were groundwater, the possible source of parasitical water within the sewer, and drinking water discharged into the sewer system. This method was applied at an urban catchment scale in order to test the effective water-tightness of two different sewer networks. The sampling strategy was based on an uncertainty analysis and the errors have been propagated using Monte Carlo random sampling. Our field applications showed that the method can be applied easily and quickly, but the error in the estimated infiltration rate can be up to 20%. The estimated infiltration into the recent sewer in Torraccia is 14% and can be considered negligible given the precision of the method, while the old sewer in Infernetto has an estimated infiltration of 50%.
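
    The two-component separation and its Monte Carlo error propagation reduce to a short calculation. The isotopic signatures and standard deviations in the sketch below are hypothetical placeholders, not the Rome measurements; only the mass-balance formula and the error-propagation approach follow the abstract.

      # Two-component delta(18)O hydrograph separation with Monte Carlo error
      # propagation of the end-member and sewage-sample uncertainties.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 100_000

      # delta(18)O signatures (per mil) of the two end members and of the sewage flow.
      d_dw = rng.normal(-6.8, 0.1, n)      # drinking water
      d_gw = rng.normal(-5.9, 0.1, n)      # groundwater (parasitic infiltration)
      d_sw = rng.normal(-6.5, 0.1, n)      # dry-weather sewage sample

      # Mass balance: d_sw = x * d_gw + (1 - x) * d_dw, solved for the
      # infiltration fraction x.
      x = (d_sw - d_dw) / (d_gw - d_dw)

      print(f"infiltration fraction: {np.mean(x):.2f} "
            f"+/- {np.std(x):.2f} (1 sigma from Monte Carlo propagation)")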

  16. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating the unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance (which can result in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed carbon dioxide fluxes and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partitioned eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
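
    The joint state-parameter idea at the heart of the SEnKF can be sketched on a toy problem. In the example below, an unknown parameter of a scalar forced autoregressive model is appended to the state vector, both are updated by a perturbed-observation ensemble Kalman analysis, and a simple kernel-smoothing shrinkage step keeps the parameter ensemble from collapsing. The model, noise levels, and smoothing factor are illustrative assumptions; the sketch is unrelated to the actual eddy flux partitioning model.

      # Joint state-parameter ensemble Kalman filter with kernel smoothing on a
      # toy forced AR(1) model: x_{t+1} = a * x_t + 1 + noise, with unknown a.
      import numpy as np

      rng = np.random.default_rng(5)
      n_ens, n_steps = 100, 200
      a_true, obs_err = 0.9, 0.2

      # Truth and synthetic observations of the state x.
      x_true, obs = 10.0, []
      for _ in range(n_steps):
          x_true = a_true * x_true + 1.0 + 0.1 * rng.standard_normal()
          obs.append(x_true + obs_err * rng.standard_normal())

      # Augmented ensemble: column 0 = state x, column 1 = unknown parameter a.
      ens = np.column_stack([rng.normal(10.0, 1.0, n_ens), rng.normal(0.5, 0.3, n_ens)])
      h = np.array([1.0, 0.0])          # observation operator (x is observed)
      delta = 0.98                      # kernel-smoothing shrinkage factor

      for y in obs:
          # Forecast: propagate each member's state with its own parameter value.
          ens[:, 0] = ens[:, 1] * ens[:, 0] + 1.0 + 0.1 * rng.standard_normal(n_ens)

          # Kernel smoothing: shrink parameters toward the mean and add compensating
          # noise, controlling variance narrowing without freezing the parameters.
          a_mean, a_std = ens[:, 1].mean(), ens[:, 1].std()
          ens[:, 1] = a_mean + delta * (ens[:, 1] - a_mean) \
              + np.sqrt(1.0 - delta**2) * a_std * rng.standard_normal(n_ens)

          # EnKF analysis on the augmented vector (perturbed observations).
          C = np.cov(ens.T)                          # 2x2 augmented covariance
          K = C @ h / (h @ C @ h + obs_err**2)       # Kalman gain
          innov = y + obs_err * rng.standard_normal(n_ens) - ens @ h
          ens += np.outer(innov, K)

      print(f"estimated parameter a = {ens[:, 1].mean():.2f} +/- {ens[:, 1].std():.2f} "
            f"(truth {a_true})")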

  17. A bottom-up approach in estimating the measurement uncertainty and other important considerations for quantitative analyses in drug testing for horses.

    PubMed

    Leung, Gary N W; Ho, Emmie N M; Kwok, W Him; Leung, David K K; Tang, Francis P W; Wan, Terence S M; Wong, April S Y; Wong, Colton H F; Wong, Jenny K Y; Yu, Nola H

    2007-09-07

    Quantitative determination, particularly of threshold substances in biological samples, is much more demanding than qualitative identification. An essential element of any quantitative determination is the measurement uncertainty (MU) associated with the determined value. The International Standard ISO/IEC 17025, "General requirements for the competence of testing and calibration laboratories", has more prescriptive requirements on the MU than its superseded document, ISO/IEC Guide 25. Under both the 1999 and 2005 versions of the new standard, an estimation of the MU is mandatory for all quantitative determinations. To comply with this requirement, a protocol was established in the authors' laboratory in 2001. The protocol has since evolved based on our practical experience, and a refined version was adopted in 2004. This paper describes our approach to establishing the MU, as well as some other important considerations, for the quantification of threshold substances in biological samples as applied in the area of doping control for horses. The testing of threshold substances can be viewed as a compliance test (or testing to a specified limit); as such, it should only be necessary to establish the MU at the threshold level. The steps in the "Bottom-Up" approach we adopted are similar to those described in the EURACHEM/CITAC guide, "Quantifying Uncertainty in Analytical Measurement". They involve first specifying the measurand, including the relationship between the measurand and the input quantities upon which it depends. This is followed by identifying all applicable uncertainty contributions using a "cause and effect" diagram. The magnitude of each uncertainty component is then calculated and converted to a standard uncertainty. A recovery study is also conducted to determine whether the method bias is significant and whether a recovery (or correction) factor needs to be applied. All standard uncertainties with values greater than 30% of the largest one are then used to derive the combined standard uncertainty. Finally, an expanded uncertainty is calculated at the 99% one-tailed confidence level by multiplying the combined standard uncertainty by an appropriate coverage factor (k). A sample is considered positive if the determined concentration of the threshold substance exceeds its threshold by more than the expanded uncertainty. In addition, other important considerations that can have a significant impact on quantitative analyses are presented.
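
    The final steps of this bottom-up approach, screening the components, combining them in quadrature, and applying a one-tailed 99% coverage factor, can be written out in a few lines. The component values, the threshold, and the determined concentration in the sketch below are hypothetical placeholders rather than values from the laboratory's protocol.

      # Combine screened standard uncertainties in quadrature, expand with a
      # one-tailed 99% coverage factor, and compare the determined concentration
      # against threshold + expanded uncertainty.
      import numpy as np
      from scipy.stats import norm

      threshold = 2.0        # threshold concentration (arbitrary units), hypothetical
      determined = 2.35      # determined concentration in the sample, hypothetical

      # Relative standard uncertainties of the contributing components at threshold.
      components = {
          "calibration standard": 0.020,
          "calibration curve":    0.035,
          "repeatability":        0.040,
          "recovery correction":  0.025,
          "volume/weighing":      0.004,   # screened out (< 30% of the largest)
      }

      largest = max(components.values())
      kept = {k: u for k, u in components.items() if u > 0.3 * largest}

      u_combined_rel = np.sqrt(sum(u**2 for u in kept.values()))
      k = norm.ppf(0.99)                              # one-tailed 99% coverage factor
      U_expanded = k * u_combined_rel * threshold     # expanded uncertainty at threshold

      print(f"combined relative standard uncertainty: {u_combined_rel:.3f}")
      print(f"expanded uncertainty (k = {k:.2f}): {U_expanded:.3f}")
      print("positive finding" if determined > threshold + U_expanded else "not positive")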

  18. Probabilistic framework for the estimation of the adult and child toxicokinetic intraspecies uncertainty factors.

    PubMed

    Pelekis, Michael; Nicolich, Mark J; Gauthier, Joseph S

    2003-12-01

    Human health risk assessments use point values to develop risk estimates and thus impart a deterministic character to risk, which, by definition, is a probabilistic phenomenon. The risk estimates are calculated for individuals and then, using uncertainty factors (UFs), are extrapolated to a population characterized by variability. Regulatory agencies have recommended the quantification of the impact of variability in risk assessments through the application of probabilistic methods. In the present study, a framework for the quantitative analysis of uncertainty (U) and variability (V) in target tissue dose in the population was developed by applying probabilistic analysis to physiologically based toxicokinetic models. The mechanistic parameters that determine kinetics were described with probability density functions (PDFs). Since each PDF depicts the frequency of occurrence of all expected values of each parameter in the population, the combined effects of multiple sources of U/V were accounted for in the estimated distribution of tissue dose in the population, and a unified (adult and child) intraspecies toxicokinetic uncertainty factor, UFH-TK, was determined. The results show that the proposed framework accounts effectively for U/V in population toxicokinetics. The ratio of the 95th percentile to the 50th percentile of the annual average concentration of the chemical at the target tissue or organ (i.e., the UFH-TK) varies with age. The ratio is equivalent to a unified intraspecies toxicokinetic UF, and it is one of the UFs by which the NOAEL can be divided to obtain the RfC/RfD. The 10-fold intraspecies UF is intended to account for uncertainty and variability in toxicokinetics (3.2x) and toxicodynamics (3.2x); this article deals exclusively with the toxicokinetic component. The framework provides an alternative to the default methodology and is advantageous in that the evaluation of toxicokinetic variability is based on the distribution of the effective target tissue dose, rather than the applied dose. It allows the default adult and child intraspecies UF to be replaced with toxicokinetic data-derived values and provides accurate chemical-specific estimates of their magnitude. It shows that proper application of probability and toxicokinetic theories can reduce uncertainties when establishing exposure limits for specific compounds and provide better assurance that established limits are adequately protective. It contributes to the development of a probabilistic noncancer risk assessment framework and will ultimately lead to the unification of cancer and noncancer risk assessment methodologies.
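
    The central calculation, propagating parameter PDFs through a dose metric and taking the ratio of the 95th to the 50th percentile, is easy to sketch. The example below uses a one-compartment steady-state blood concentration as a stand-in for the full physiologically based toxicokinetic model, and all distributions are invented for illustration.

      # Monte Carlo propagation of toxicokinetic parameter PDFs through a simple
      # steady-state dose metric; the 95th/50th percentile ratio is the
      # data-derived intraspecies toxicokinetic UF.
      import numpy as np

      rng = np.random.default_rng(6)
      n = 200_000

      exposure = 1.0                                   # constant dose rate (arbitrary units)
      clearance = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=n)  # metabolic clearance
      blood_air = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=n)  # partition coefficient

      # Steady-state target-tissue (blood) concentration for each simulated individual.
      c_ss = exposure * blood_air / clearance

      p50, p95 = np.percentile(c_ss, [50, 95])
      print(f"UF_H-TK (95th/50th percentile ratio) = {p95 / p50:.2f}")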

  19. Analysis of laser fluorosensor systems for remote algae detection and quantification

    NASA Technical Reports Server (NTRS)

    Browell, E. V.

    1977-01-01

    The development and performance of single- and multiple-wavelength laser fluorosensor systems for use in the remote detection and quantification of algae are discussed. The appropriate equation for the fluorescence power received by a laser fluorosensor system is derived in detail. Experimental development of a single wavelength system and a four wavelength system, which selectively excites the algae contained in the four primary algal color groups, is reviewed, and test results are presented. A comprehensive error analysis is reported which evaluates the uncertainty in the remote determination of the chlorophyll a concentration contained in algae by single- and multiple-wavelength laser fluorosensor systems. Results of the error analysis indicate that the remote quantification of chlorophyll a by a laser fluorosensor system requires optimum excitation wavelength(s), remote measurement of marine attenuation coefficients, and supplemental instrumentation to reduce uncertainties in the algal fluorescence cross sections.

  20. New methodology for the analysis of volatile organic compounds (VOCs) in bioethanol by gas chromatography coupled to mass spectrometry

    NASA Astrophysics Data System (ADS)

    Campos, M. S. G.; Sarkis, J. E. S.

    2018-03-01

    This study presents a new analytical methodology for the determination of 11 compounds present in ethanol samples by gas chromatography coupled to mass spectrometry (GC-MS), using a medium-polarity chromatography column composed of 6% cyanopropyl-phenyl and 94% dimethyl polysiloxane. The validation parameters were determined according to NBR ISO 17025:2005. The recovery rates of the studied compounds ranged from 100.4% to 114.7%. The limits of quantification are between 2.4 mg.kg-1 and 5.8 mg.kg-1. The measurement uncertainty was estimated at approximately 8%.

  1. Current and anticipated uses of thermal-hydraulic codes in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teschendorff, V.; Sommer, F.; Depisch, F.

    1997-07-01

    In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.

  2. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is the construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high-dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin’s own 10 petaflops Stampede system, ANL’s Mira system, and ORNL’s Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  3. Uncertainty Quantification of Equilibrium Climate Sensitivity in CCSM4

    NASA Astrophysics Data System (ADS)

    Covey, C. C.; Lucas, D. D.; Tannahill, J.; Klein, R.

    2013-12-01

    Uncertainty in the global mean equilibrium surface warming due to doubled atmospheric CO2, as computed by a "slab ocean" configuration of the Community Climate System Model version 4 (CCSM4), is quantified using 1,039 perturbed-input-parameter simulations. The slab ocean configuration reduces the model's e-folding time when approaching an equilibrium state to ~5 years. This time is much less than for the full ocean configuration, consistent with the shallow depth of the upper well-mixed layer of the ocean represented by the "slab." Adoption of the slab ocean configuration requires the assumption of preset values for the convergence of ocean heat transport beneath the upper well-mixed layer. A standard procedure for choosing these values maximizes agreement with the full ocean version's simulation of the present-day climate when input parameters assume their default values. For each new set of input parameter values, we computed the change in ocean heat transport implied by a "Phase 1" model run in which sea surface temperatures and sea ice concentrations were set equal to present-day values. The resulting total ocean heat transport (= standard value + change implied by Phase 1 run) was then input into "Phase 2" slab ocean runs with varying values of atmospheric CO2. Our uncertainty estimate is based on Latin Hypercube sampling over expert-provided uncertainty ranges of N = 36 adjustable parameters in the atmosphere (CAM4) and sea ice (CICE4) components of CCSM4. Two-dimensional projections of our sampling distribution for the N(N-1)/2 possible pairs of input parameters indicate full coverage of the N-dimensional parameter space, including edges. We used a machine learning-based support vector regression (SVR) statistical model to estimate the probability density function (PDF) of equilibrium warming. This fitting procedure produces a PDF that is qualitatively consistent with the raw histogram of our CCSM4 results. Most of the values from the SVR statistical model are within ~0.1 K of the raw results, well below the inter-decile range inferred below. Independent validation of the fit indicates residual errors that are distributed about zero with a standard deviation of 0.17 K. Analysis of variance shows that the equilibrium warming in CCSM4 is mainly linear in parameter changes. Thus, in accord with the Central Limit Theorem of statistics, the PDF of the warming is approximately Gaussian, i.e. symmetric about its mean value (3.0 K). Since SVR allows for highly nonlinear fits, the symmetry is not an artifact of the fitting procedure. The 10-90 percentile range of the PDF is 2.6-3.4 K, consistent with earlier estimates from CCSM4 but narrower than estimates from other models, which sometimes produce a high-temperature asymmetric tail in the PDF. This work was performed under auspices of the US Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, and was funded by LLNL's Uncertainty Quantification Strategic Initiative (Laboratory Directed Research and Development Project 10-SI-013).
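
    The statistical-emulation step of such an analysis, sampling the parameter space with a Latin hypercube, fitting a support vector regression to the model response, and resampling the fit to obtain a PDF, can be sketched as follows. The response function below is a synthetic, mostly linear surrogate invented for illustration; only the workflow, not any numerical result, mirrors the CCSM4 study.

      # Latin hypercube sampling of a 36-parameter space, an SVR emulator of a
      # synthetic "equilibrium warming" response, and a PDF estimated by densely
      # resampling the fitted emulator.
      import numpy as np
      from scipy.stats import qmc
      from sklearn.svm import SVR

      rng = np.random.default_rng(7)
      n_params, n_runs = 36, 1039

      sampler = qmc.LatinHypercube(d=n_params, seed=7)
      X = sampler.random(n=n_runs)                    # parameters scaled to [0, 1]

      # Synthetic warming response: mostly linear in the parameters plus weak
      # nonlinearity and residual scatter (all coefficients are invented).
      coeff = rng.normal(0.0, 0.15, n_params)
      y = 3.0 + (X - 0.5) @ coeff + 0.05 * np.sin(6 * X[:, 0]) \
          + 0.17 * rng.standard_normal(n_runs)

      emulator = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)

      # Dense resampling of the emulator to estimate the PDF of the warming.
      X_big = qmc.LatinHypercube(d=n_params, seed=8).random(n=20_000)
      y_big = emulator.predict(X_big)

      p10, p50, p90 = np.percentile(y_big, [10, 50, 90])
      print(f"emulated warming: median {p50:.2f} K, 10-90% range {p10:.2f}-{p90:.2f} K")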

  4. Improved MICROBASE Product with Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Meng

    The data set contains four primary microphysical quantities: liquid water content, ice water content, liquid effective radius, and ice effective radius. Bit QC and data quality QC flags are also calculated. Quantification of uncertainties (incorporating the work of Zhao et al. 2013) is included for all four quantities.

  5. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" approach compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met the inclusion criteria. Each trial's mRS distribution was multiplied by the noise distribution from published mRS inter-rater variability to generate an error percentage for the "shift" and for dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size that would have been required had classification uncertainty been taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are obtained with dichotomization. While using the full range of the mRS is conceptually appealing, the gain of information is counterbalanced by a decrease in reliability. The resultant errors need to be considered, since the sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.

  6. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting the rainfall input requirements of urban hydrology - including the increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records - rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain gauge-only records through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models - originally developed for large scales - have been tested at urban scales [2] and have been shown to fail to capture small-scale storm dynamics well, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as a case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model, of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.

  7. The NEWS Water Cycle Climatology

    NASA Astrophysics Data System (ADS)

    Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T.; Olson, W. S.

    2012-12-01

    NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.

  8. The NEWS Water Cycle Climatology

    NASA Technical Reports Server (NTRS)

    Rodell, Matthew; Beaudoing, Hiroko Kato; L'Ecuyer, Tristan; Olson, William

    2012-01-01

    NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.

  9. Meeting report: Estimating the benefits of reducing hazardous air pollutants--summary of 2009 workshop and future considerations.

    PubMed

    Gwinn, Maureen R; Craig, Jeneva; Axelrad, Daniel A; Cook, Rich; Dockins, Chris; Fann, Neal; Fegley, Robert; Guinnup, David E; Helfand, Gloria; Hubbell, Bryan; Mazur, Sarah L; Palma, Ted; Smith, Roy L; Vandenberg, John; Sonawane, Babasaheb

    2011-01-01

    Quantifying the benefits of reducing hazardous air pollutants (HAPs, or air toxics) has been limited by gaps in toxicological data, uncertainties in extrapolating results from high-dose animal experiments to estimate human effects at lower doses, limited ambient and personal exposure monitoring data, and insufficient economic research to support valuation of the health impacts often associated with exposure to individual air toxics. To address some of these issues, the U.S. Environmental Protection Agency held the Workshop on Estimating the Benefits of Reducing Hazardous Air Pollutants (HAPs) in Washington, DC, from 30 April to 1 May 2009. Experts from multiple disciplines discussed how best to move forward on air toxics benefits assessment, with a focus on developing near-term capability to conduct quantitative benefits assessment. Proposed methodologies involved analysis of data-rich pollutants and application of this analysis to other pollutants, using dose-response modeling of animal data for estimating benefits to humans, determining dose-equivalence relationships for different chemicals with similar health effects, and analysis similar to that used for criteria pollutants. Limitations and uncertainties in economic valuation of benefits assessment for HAPS were discussed as well. These discussions highlighted the complexities in estimating the benefits of reducing air toxics, and participants agreed that alternative methods for benefits assessment of HAPs are needed. Recommendations included clearly defining the key priorities of the Clean Air Act air toxics program to identify the most effective approaches for HAPs benefits analysis, focusing on susceptible and vulnerable populations, and improving dose-response estimation for quantification of benefits.

  10. Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education

    ERIC Educational Resources Information Center

    Schwalbe, Michelle Kristin

    2010-01-01

    This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…

  11. Uncertainty Quantification in High Throughput Screening ...

    EPA Pesticide Factsheets

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
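
    The bootstrap idea can be sketched on a single synthetic concentration-response curve. The example below fits a Hill model, resamples the residuals, refits many times, and reports bootstrap confidence intervals for efficacy (top) and potency (AC50); the data, the model choice, and the settings are illustrative assumptions and do not reproduce the ToxCast fitting pipeline.

      # Residual bootstrap for a Hill-model concentration-response fit: refit on
      # resampled residuals and summarise the spread of the refitted parameters.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(8)

      def hill(conc, top, ac50, n):
          return top / (1.0 + (ac50 / conc) ** n)

      conc = np.logspace(-2, 2, 9)                  # 9 test concentrations (uM)
      true = dict(top=80.0, ac50=1.5, n=1.2)
      resp = hill(conc, **true) + 8.0 * rng.standard_normal(conc.size)

      p0, bounds = [50.0, 1.0, 1.0], ([0, 1e-3, 0.3], [200, 1e3, 8])
      popt, _ = curve_fit(hill, conc, resp, p0=p0, bounds=bounds)
      resid = resp - hill(conc, *popt)

      boot = []
      for _ in range(500):
          resp_b = hill(conc, *popt) + rng.choice(resid, size=resid.size, replace=True)
          try:
              pb, _ = curve_fit(hill, conc, resp_b, p0=popt, bounds=bounds)
              boot.append(pb)
          except RuntimeError:
              continue                              # skip non-converged refits

      boot = np.array(boot)
      for name, i in [("top (efficacy)", 0), ("AC50 (potency)", 1)]:
          lo, hi = np.percentile(boot[:, i], [2.5, 97.5])
          print(f"{name}: estimate {popt[i]:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")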

  12. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate releases of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method is used for the quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach, based on a combination of polynomial chaos theory and the method of quadrature points, is used to approximate probabilistic hazard maps. Besides precise quantification of uncertainty, useful measurement data are also highly important for accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor configuration used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that the measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions and observed data, given a set of kinematic constraints on the mobile sensors. A dynamic programming method is used to solve the resulting optimal control problem. To complete the loop of the source characterization process, two different estimation techniques, a minimum variance estimation framework and a Bayesian inference method, have been developed to fuse model forecasts with measurement data. Incomplete information regarding the distribution of the noise associated with measurement data is another major challenge in the source characterization of plume dispersion incidents. This frequently happens in the assimilation of atmospheric data derived from satellite imagery, since satellite imagery can be polluted with noise depending on weather conditions, clouds, humidity, etc. Unfortunately, there is no accurate procedure to quantify the error in recorded satellite data, so using classical data assimilation methods in this situation is not straightforward. In this dissertation, the basic idea of a novel approach has been proposed to tackle these types of real-world problems with more accuracy and robustness. A simple example demonstrating a real-world scenario is presented to validate the developed methodology.

  13. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
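
    The effect of replacing a diagonal (white-noise) observation weight with a full covariance that models long-period, correlated errors can be illustrated on a toy linear least-squares problem. In the sketch below, a first-order Gauss-Markov error process stands in for the correlated range-rate errors, and the formal parameter errors from the two weightings are compared; the design matrix and noise parameters are invented, and the example is unrelated to the actual GRACE processing.

      # Weighted least squares with a diagonal versus a full observation covariance:
      # the full covariance (colored noise) yields more realistic formal errors.
      import numpy as np

      rng = np.random.default_rng(9)
      m, n = 500, 4
      t = np.linspace(0, 1, m)
      A = np.column_stack([np.ones(m), t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
      x_true = np.array([1.0, -0.5, 0.3, 0.2])

      # Long-period correlated (first-order Gauss-Markov) observation errors.
      phi, sigma = 0.97, 0.05
      R = sigma**2 * phi ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
      e = np.linalg.cholesky(R) @ rng.standard_normal(m)
      y = A @ x_true + e

      def solve(R_obs):
          """Weighted least squares with observation covariance R_obs (m x m)."""
          Ri = np.linalg.inv(R_obs)
          N = A.T @ Ri @ A                          # normal matrix
          x = np.linalg.solve(N, A.T @ Ri @ y)
          return x, np.sqrt(np.diag(np.linalg.inv(N)))   # estimate, formal errors

      x_white, s_white = solve(np.eye(m) * sigma**2)     # white-noise assumption
      x_full, s_full = solve(R)                          # full covariance (colored noise)

      print("white-noise estimate    :", np.round(x_white, 3), " formal errors:", np.round(s_white, 4))
      print("full-covariance estimate:", np.round(x_full, 3), " formal errors:", np.round(s_full, 4))
      print("true parameters         :", x_true)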

  14. Polynomial chaos representation of databases on manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2017-04-15

    Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.

  15. The Variable Grid Method, an Approach for the Simultaneous Visualization and Assessment of Spatial Trends and Uncertainty

    NASA Astrophysics Data System (ADS)

    Rose, K.; Glosser, D.; Bauer, J. R.; Barkhurst, A.

    2015-12-01

    The products of spatial analyses that leverage the interpolation of sparse, point data to represent continuous phenomena are often presented without clear explanations of the uncertainty associated with the interpolated values. As a result, there is frequently insufficient information provided to effectively support advanced computational analyses and individual research and policy decisions utilizing these results. This highlights the need for a reliable approach capable of quantitatively producing and communicating spatial data analyses and their inherent uncertainties for a broad range of uses. To address this need, we have developed the Variable Grid Method (VGM), and an associated Python tool, which is a flexible approach that can be applied to a variety of analyses and use case scenarios where users need a method to effectively study, evaluate, and analyze spatial trends and patterns while communicating the uncertainty in the underlying spatial datasets. The VGM outputs a simultaneous visualization representative of the spatial data analyses and quantification of underlying uncertainties, which can be calculated using data related to sample density, sample variance, interpolation error, uncertainty calculated from multiple simulations, etc. We will present examples of our research utilizing the VGM to quantify key spatial trends and patterns for subsurface data interpolations and their uncertainties and leverage these results to evaluate storage estimates and potential impacts associated with underground injection for CO2 storage and unconventional resource production and development. The insights provided by these examples identify how the VGM can provide critical information about the relationship between uncertainty and spatial data that is necessary to better support their use in advanced computational analyses and to inform research, management, and policy decisions.

  16. Forensic Uncertainty Quantification of Explosive Dispersal of Particles

    NASA Astrophysics Data System (ADS)

    Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho

    2017-06-01

    In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued with poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments to discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience to date has been that, by making an analogy to crime scene investigation when looking at validation experiments, valuable insights may be gained. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.

  17. Waveform-based Bayesian full moment tensor inversion and uncertainty determination for the induced seismicity in an oil/gas field

    NASA Astrophysics Data System (ADS)

    Gu, Chen; Marzouk, Youssef M.; Toksöz, M. Nafi

    2018-03-01

    Small earthquakes occur due to natural tectonic motions and can also be induced by oil and gas production processes. In many oil/gas fields and hydrofracking processes, induced earthquakes result from fluid extraction or injection. The locations and source mechanisms of these earthquakes provide valuable information about the reservoirs. Analysis of induced seismic events has mostly assumed a double-couple source mechanism. However, recent studies have shown a non-negligible percentage of non-double-couple components of source moment tensors in hydraulic fracturing events, assuming a full moment tensor source mechanism. Without uncertainty quantification of the moment tensor solution, it is difficult to determine the reliability of these source models. This study develops a Bayesian method to perform waveform-based full moment tensor inversion and uncertainty quantification for induced seismic events, accounting for both location and velocity model uncertainties. We conduct tests with synthetic events to validate the method, and then apply our newly developed Bayesian inversion approach to real induced seismicity in an oil/gas field in the Sultanate of Oman, determining the uncertainties in the source mechanism and in the location of that event.

  18. Uncertainty quantification and optimal decisions

    PubMed Central

    2017-01-01

    A mathematical model can be analysed to construct policies for action that are close to optimal for the model. If the model is accurate, such policies will be close to optimal when implemented in the real world. In this paper, the different aspects of an ideal workflow are reviewed: modelling, forecasting, evaluating forecasts, data assimilation and constructing control policies for decision-making. The example of the oil industry is used to motivate the discussion, and other examples, such as weather forecasting and precision agriculture, are used to argue that the same mathematical ideas apply in different contexts. Particular emphasis is placed on (i) uncertainty quantification in forecasting and (ii) how decisions are optimized and made robust to uncertainty in models and judgements. This necessitates full use of the relevant data and, by balancing costs and benefits into the long term, may suggest policies quite different from those relevant to the short term. PMID:28484343

  19. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on the output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach, which uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters, is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. The Kronrod-Patterson-Hermite nodes are shown to be more efficient than the Gauss-Hermite nodes due to nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve the performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
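    For orientation, the baseline sketch below estimates a Gaussian expectation with ordinary (unnested) Gauss-Hermite quadrature, the rule the nested Kronrod-Patterson-Hermite construction improves upon; the integrand is an arbitrary stand-in for a model response.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss

    def f(x):
        return np.exp(0.5 * x)               # toy model response of a N(0,1) input

    nodes, weights = hermegauss(9)           # nodes/weights for weight exp(-x^2/2)
    weights = weights / weights.sum()        # normalise to a probability measure
    estimate = np.sum(weights * f(nodes))

    print("quadrature:", estimate, " exact:", np.exp(0.125))   # E[exp(X/2)] = e^(1/8)
    ```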

  20. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

    ...probability distribution for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre polynomials for uniformly distributed parameters). [...] parameters and windfields will drive our simulations. We will use uncertainty quantification methodology (polynomial chaos quadrature in combination with data integration) to complete the DDDAS loop.

  1. CCQM-K102: polybrominated diphenyl ethers in sediment

    NASA Astrophysics Data System (ADS)

    Ricci, Marina; Shegunova, Penka; Conneely, Patrick; Becker, Roland; Maldonado Torres, Mauricio; Arce Osuna, Mariana; On, Tang Po; Man, Lee Ho; Baek, Song-Yee; Kim, Byungjoo; Hopley, Christopher; Liscio, Camilla; Warren, John; Le Diouron, Véronique; Lardy-Fontan, Sophie; Lalere, Béatrice; Mingwu, Shao; Kucklick, John; Vamathevan, Veronica; Matsuyama, Shigetomo; Numata, Masahiko; Brits, Martin; Quinn, Laura; Fernandes-Whaley, Maria; Ceyhan Gören, Ahmet; Binici, Burcu; Konopelko, Leonid; Krylov, Anatoli; Mikheeva, Alena

    2017-01-01

    The key comparison CCQM-K102: Polybrominated diphenyl ethers in sediment was coordinated by the JRC, Directorate F - Health, Consumers & Reference Materials, Geel (Belgium) under the auspices of the Organic Analysis Working Group (OAWG) of the Comité Consultatif pour la Quantité de Matière (CCQM). Thirteen National Metrology Institutes or Designated Institutes and the JRC participated. Participants were requested to report the mass fraction (on a dry mass basis) of BDE 47, 99 and 153 in the freshwater sediment study material. The sediment originated from a river in Belgium and contained PBDEs (and other pollutants) at levels commonly found in environmental samples. The comparison was designed to demonstrate participants' capability of analysing non-polar organic molecules in abiotic dried matrices (approximate range of molecular weights: 100 to 800 g/mol, polarity corresponding to pKow < -2, range of mass fraction: 1-1000 μg/kg). All participants (except one using ultrasonic extraction) applied Pressurised Liquid Extraction or Soxhlet extraction, while the instrumental analysis was performed with GC-MS/MS, GC-MS or GC-HRMS. An Isotope Dilution Mass Spectrometry approach was used for quantification (except in one case). The assigned Key Comparison Reference Values (KCRVs) were the medians of thirteen results for BDE 47 and eleven results for BDE 99 and 153, respectively. BDE 47 was assigned a KCRV of 15.60 μg/kg with a combined standard uncertainty of 0.41 μg/kg, BDE 99 was assigned a KCRV of 33.69 μg/kg with a combined standard uncertainty of 0.81 μg/kg, and BDE 153 was assigned a KCRV of 6.28 μg/kg with a combined standard uncertainty of 0.28 μg/kg. The coverage factor for the estimation of the expanded uncertainty of the KCRVs was chosen as k = 2. The degree of equivalence (with the KCRV) and its uncertainty were calculated for each result. Most of the participants in CCQM-K102 were able to demonstrate or confirm their capabilities in the analysis of non-polar organic molecules in abiotic dried matrices. Throughout the study it became clear that matrix interferences can influence the accurate quantification of the PBDEs if the analytical methodology applied is not appropriately adapted and optimised. This comparison shows that quantification of PBDEs in the low-to-middle μg/kg range in a challenging environmental abiotic dried matrix can be achieved with relative expanded uncertainties below 15 % (more than 70 % of participating laboratories), well in line with the best measurement performances in the environmental analysis field. The final report, which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/, has been peer-reviewed and approved for publication by the CCQM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  2. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    PubMed

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.

  3. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    PubMed

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  4. Greenhouse Gas Sensing Using Small Unmanned Aerial Systems - Field Experiment Results and Future Directions

    NASA Astrophysics Data System (ADS)

    Aubrey, A. D.; Christensen, L. E.; Brockers, R.; Thompson, D. R.

    2014-12-01

    Greenhouse gas point source detection and quantification often require high spatial resolution on the order of meters. These applications, which help close the gap in emissions estimate uncertainties, also demand sensing with high sensitivity and in a fashion that accounts for spatiotemporal variability on the order of seconds to minutes. Low-cost vertical takeoff and landing (VTOL) small unmanned aerial systems (sUAS) provide a means to detect and identify the location of point source gas emissions while offering ease of deployment and high maneuverability. Our current fielded gas sensing sUAS platforms are able to provide instantaneous in situ concentration measurements at locations within line of sight of the operator. Recent results from field experiments demonstrating methane detection and plume characterization will be discussed here, including a performance assessment conducted via a controlled release experiment in 2013. The logical extension of sUAS gas concentration measurement is quantification of flux rate. We will discuss the preliminary strategy for quantitative point source flux determination, including the intrinsic challenges and the heritage from airborne science campaigns associated with this quantification. This system approach forms the basis for intelligent autonomous quantitative characterization of gas plumes, which holds great value for applications in commercial, regulatory, and safety environments.

  5. Risk-based decision making to manage water quality failures caused by combined sewer overflows

    NASA Astrophysics Data System (ADS)

    Sriwastava, A. K.; Torres-Matallana, J. A.; Tait, S.; Schellart, A.

    2017-12-01

    Regulatory authorities set environmental permits for water utilities such that the combined sewer overflows (CSOs) managed by these companies conform to the regulations. These utility companies face the risk of penalties or negative publicity if they breach the environmental permit. These risks can be addressed by designing appropriate solutions, such as investing in additional infrastructure that improves the system capacity and reduces the impact of CSO spills. The performance of these solutions is often estimated using urban drainage models. Hence, any uncertainty in these models can have a significant effect on the decision making process. This study outlines a risk-based decision making approach to address water quality failure caused by CSO spills. A calibrated lumped urban drainage model is used to simulate CSO spill quality in the Haute-Sûre catchment in Luxembourg. Uncertainty in rainfall and model parameters is propagated through Monte Carlo simulations to quantify uncertainty in the concentration of ammonia in the CSO spill. A combination of decision alternatives, such as the construction of a storage tank at the CSO and the reduction in the flow contribution of catchment surfaces, are selected as planning measures to avoid the water quality failure. Failure is defined as exceedance of a concentration-duration based threshold based on Austrian emission standards for ammonia (De Toffol, 2006) with a certain frequency. For each decision alternative, uncertainty quantification results in a probability distribution of the number of annual CSO spill events which exceed the threshold. For each alternative, a buffered failure probability, as defined in Rockafellar & Royset (2010), is estimated. The buffered failure probability (pbf) is a conservative estimate of the failure probability (pf); however, unlike the failure probability, it includes information about the upper tail of the distribution. A Pareto-optimal set of solutions is obtained by performing mean-pbf optimization. The effectiveness of using the buffered failure probability compared to the failure probability is tested by comparing the solutions obtained by using mean-pbf and mean-pf optimizations.
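    A hedged sketch of the buffered-failure-probability idea on synthetic spill counts, using the exceedance form bPOE_z(X) = min over a >= 0 of E[max(0, a(X - z) + 1)] (Norton and Uryasev); the Poisson spill model, threshold, and grid search are illustrative assumptions, not the study's setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    spills = rng.poisson(3.0, size=20000).astype(float)   # synthetic annual exceedance counts
    z = 6.0                                               # allowed number of events per year

    p_f = np.mean(spills > z)                             # ordinary failure probability
    a_grid = np.linspace(0.0, 5.0, 2001)                  # crude 1-D search over the multiplier a
    p_bf = min(np.mean(np.maximum(0.0, a * (spills - z) + 1.0)) for a in a_grid)

    print(f"failure probability p_f   ~ {p_f:.3f}")
    print(f"buffered probability p_bf ~ {p_bf:.3f}  (conservative, tail-aware)")
    ```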

  6. Advanced NASA Earth Science Mission Concept for Vegetation 3D Structure, Biomass and Disturbance

    NASA Technical Reports Server (NTRS)

    Ranson, K. Jon

    2007-01-01

    Carbon in forest canopies represents about 85% of the total carbon in the Earth's aboveground biomass (Olson et al., 1983). A major source of uncertainty in global carbon budgets derives from large errors in the current estimates of these carbon stocks (IPCC, 2001). The magnitudes and distributions of terrestrial carbon storage, along with changes in sources and sinks for atmospheric CO2 due to land use change, remain the most significant uncertainties in Earth's carbon budget. These uncertainties severely limit accurate terrestrial carbon accounting; our ability to evaluate terrestrial carbon management schemes; and the veracity of atmospheric CO2 projections in response to further fossil fuel combustion and other human activities. Measurements of vegetation three-dimensional (3D) structural characteristics over the Earth's land surface are needed to estimate biomass and carbon stocks and to quantify biomass recovery following disturbance. These measurements include vegetation height, the vertical profile of canopy elements (i.e., leaves, stems, branches), and/or the volume scattering of canopy elements. They are critical for reducing uncertainties in the global carbon budget. Disturbance by natural phenomena, such as fire or wind, as well as by human activities, such as forest harvest, and subsequent recovery, complicate the quantification of carbon storage and release. The resulting spatial and temporal heterogeneity of terrestrial biomass and carbon in vegetation make it very difficult to estimate terrestrial carbon stocks and quantify their dynamics. Vegetation height profiles and disturbance recovery patterns are also required to assess ecosystem health and characterize habitat. The three-dimensional structure of vegetation provides habitats for many species and is a control on biodiversity. Canopy height and structure influence habitat use and specialization, two fundamental processes that modify species richness and abundance across ecosystems. Accurate and consistent 3D measurements of forest structure at the landscape scale are needed for assessing impacts to animal habitats and biodiversity following disturbance.

  7. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimator suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general and can be used for a wide range of environmental problems for model uncertainty quantification.
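    The sketch below illustrates the thermodynamic (power-posterior) integration idea on a toy conjugate model where each tempered posterior can be sampled exactly, so no MCMC is needed; the model, prior, and temperature ladder are illustrative assumptions, not the groundwater application.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    sigma, tau, n = 1.0, 2.0, 20
    y = rng.normal(0.7, sigma, size=n)            # synthetic data, y_i ~ N(theta, sigma^2)

    def log_like(theta):
        theta = np.atleast_1d(theta)
        return (-0.5 * n * np.log(2 * np.pi * sigma**2)
                - 0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1) / sigma**2)

    def trapezoid(vals, xs):
        vals, xs = np.asarray(vals), np.asarray(xs)
        return np.sum(np.diff(xs) * (vals[1:] + vals[:-1]) / 2.0)

    # Thermodynamic integration: log Z = integral over beta of E_beta[log likelihood]
    betas = np.linspace(0.0, 1.0, 21)
    expected_ll = []
    for b in betas:
        prec = 1.0 / tau**2 + b * n / sigma**2    # tempered posterior is Gaussian here
        mean = (b * y.sum() / sigma**2) / prec
        theta = rng.normal(mean, 1.0 / np.sqrt(prec), size=20000)
        expected_ll.append(log_like(theta).mean())
    log_Z_ti = trapezoid(expected_ll, betas)

    # Brute-force reference on a fine theta grid
    grid = np.linspace(-10.0, 10.0, 40001)
    log_joint = log_like(grid) - 0.5 * np.log(2 * np.pi * tau**2) - 0.5 * grid**2 / tau**2
    log_Z_ref = np.log(trapezoid(np.exp(log_joint - log_joint.max()), grid)) + log_joint.max()

    print("thermodynamic:", log_Z_ti, " reference:", log_Z_ref)
    ```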

  8. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'Sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.

  9. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, a re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
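    A minimal multilevel Monte Carlo sketch (standard MLMC, not the proposed rMLMC): the quantity of interest is y(1) for dy/dt = -k y, y(0) = 1, with k ~ U(0,1), discretised by explicit Euler at increasing resolution; the levels and sample sizes are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def euler_y1(k, n_steps):
        """Explicit Euler solution of y' = -k*y, y(0) = 1, evaluated at t = 1."""
        h = 1.0 / n_steps
        y = np.ones_like(k)
        for _ in range(n_steps):
            y = y * (1.0 - h * k)
        return y

    levels = [2, 8, 32]                        # Euler steps per level (coarse to fine)
    samples = [20000, 5000, 1000]              # fewer samples on the expensive levels

    estimate = 0.0
    for lvl, (n_steps, n_samp) in enumerate(zip(levels, samples)):
        k = rng.uniform(0.0, 1.0, size=n_samp)
        fine = euler_y1(k, n_steps)
        coarse = euler_y1(k, levels[lvl - 1]) if lvl > 0 else 0.0
        estimate += np.mean(fine - coarse)     # telescoping sum of level corrections

    print("MLMC estimate:", estimate, " exact E[exp(-k)]:", 1.0 - np.exp(-1.0))
    ```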

  10. High-Throughput Thermodynamic Modeling and Uncertainty Quantification for ICME

    NASA Astrophysics Data System (ADS)

    Otis, Richard A.; Liu, Zi-Kui

    2017-05-01

    One foundational component of integrated computational materials engineering (ICME) and the Materials Genome Initiative is computational thermodynamics based on the calculation of phase diagrams (CALPHAD) method. The CALPHAD method pioneered by Kaufman has enabled the development of thermodynamic, atomic mobility, and molar volume databases of individual phases in the full space of temperature, composition, and sometimes pressure for technologically important multicomponent engineering materials, along with sophisticated computational tools for using the databases. In this article, our recent efforts will be presented in terms of developing new computational tools for high-throughput modeling and uncertainty quantification based on high-throughput, first-principles calculations and the CALPHAD method, along with their potential propagation to downstream ICME modeling and simulations.

  11. Preliminary Results on Uncertainty Quantification for Pattern Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene

    2015-09-01

    This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.

  12. Uncertainty, ensembles and air quality dispersion modeling: applications and challenges

    NASA Astrophysics Data System (ADS)

    Dabberdt, Walter F.; Miller, Erik

    The past two decades have seen significant advances in mesoscale meteorological modeling research and applications, such as the development of sophisticated and now widely used advanced mesoscale prognostic models, large eddy simulation models, four-dimensional data assimilation, adjoint models, adaptive and targeted observational strategies, and ensemble and probabilistic forecasts. Some of these advances are now being applied to urban air quality modeling and applications. Looking forward, it is anticipated that the high-priority air quality issues for the near-to-intermediate future will likely include: (1) routine operational forecasting of adverse air quality episodes; (2) real-time high-level support to emergency response activities; and (3) quantification of model uncertainty. Special attention is focused here on the quantification of model uncertainty through the use of ensemble simulations. Application to emergency-response dispersion modeling is illustrated using an actual event that involved the accidental release of the toxic chemical oleum. Both surface footprints of mass concentration and the associated probability distributions at individual receptors are seen to provide valuable quantitative indicators of the range of expected concentrations and their associated uncertainty.
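    In the spirit of the ensemble application described above, the sketch below evaluates a simplified ground-level Gaussian plume under an ensemble of wind speeds and dispersion coefficients to obtain a concentration distribution, and an exceedance probability, at a single receptor; the release rate, dispersion parameters, and threshold are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    Q = 2.0                                     # release rate, kg/s (illustrative)
    x, y = 500.0, 30.0                          # receptor: 500 m downwind, 30 m crosswind

    n = 2000                                    # ensemble size
    u = rng.lognormal(np.log(4.0), 0.3, n)      # wind speed ensemble, m/s
    sigma_y = 0.08 * x * rng.uniform(0.8, 1.2, n)   # perturbed lateral dispersion, m
    sigma_z = 0.06 * x * rng.uniform(0.8, 1.2, n)   # perturbed vertical dispersion, m

    # Ground-level concentration for a ground-level source (image-source reflection)
    conc = Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-0.5 * (y / sigma_y) ** 2)

    threshold = 1e-4                            # kg/m^3, illustrative guideline value
    print("median concentration:", np.median(conc))
    print("P(concentration > threshold):", np.mean(conc > threshold))
    ```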

  13. On uncertainty quantification of lithium-ion batteries: Application to an LiC6/LiCoO2 cell

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Maute, Kurt; Doostan, Alireza

    2015-12-01

    In this work, a stochastic, physics-based model for Lithium-ion batteries (LIBs) is presented in order to study the effects of parametric model uncertainties on the cell capacity, voltage, and concentrations. To this end, the proposed uncertainty quantification (UQ) approach, based on sparse polynomial chaos expansions, relies on a small number of battery simulations. Within this UQ framework, the identification of most important uncertainty sources is achieved by performing a global sensitivity analysis via computing the so-called Sobol' indices. Such information aids in designing more efficient and targeted quality control procedures, which consequently may result in reducing the LIB production cost. An LiC6/LiCoO2 cell with 19 uncertain parameters discharged at 0.25C, 1C and 4C rates is considered to study the performance and accuracy of the proposed UQ approach. The results suggest that, for the considered cell, the battery discharge rate is a key factor affecting not only the performance variability of the cell, but also the determination of most important random inputs.
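    As a companion illustration of the global sensitivity step, the sketch below computes first-order Sobol' indices with the Monte Carlo pick-freeze estimator on a toy additive model with known indices; the cited work obtains the same indices from sparse PCE coefficients, which is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def model(x):                                # toy stand-in for the cell model
        return x[:, 0] + 2.0 * x[:, 1] + 3.0 * x[:, 2]

    n, d = 100_000, 3
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))

    exact = [1 / 14, 4 / 14, 9 / 14]             # analytic first-order indices
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # swap in the i-th column only
        s_i = np.mean(fB * (model(ABi) - fA)) / var_y
        print(f"S_{i + 1} ~ {s_i:.3f}   (exact {exact[i]:.3f})")
    ```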

  14. Evaluating Precipitation from Orbital Data Products of TRMM and GPM over the Indian Subcontinent

    NASA Astrophysics Data System (ADS)

    Jayaluxmi, I.; Kumar, D. N.

    2015-12-01

    The rapidly growing records of microwave-based precipitation data made available from various earth observation satellites have created a pressing need to evaluate the associated uncertainty, which arises from different sources such as retrieval error, spatial/temporal sampling error and sensor-dependent error. Pertaining to microwave remote sensing, most of the studies in the literature focus on gridded data products; fewer studies exist on evaluating the uncertainty inherent in orbital data products. Evaluation of the latter is essential, as they potentially cause large uncertainties during real-time flood forecasting studies, especially at the watershed scale. The present study evaluates the uncertainty of precipitation data derived from the orbital data products of the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the 2A12, 2A25 and 2B31 products. Case study results over the flood-prone basin of the Mahanadi, India, are analyzed for precipitation uncertainty through the following facets: a) uncertainty quantification using the volumetric metrics from the contingency table [Aghakouchak and Mehran 2014]; b) error characterization using additive and multiplicative error models; c) error decomposition to identify systematic and random errors; d) comparative assessment with the orbital data from the GPM mission. The homoscedastic random errors from multiplicative error models justify a better representation of precipitation estimates by the 2A12 algorithm. It can be concluded that although the radiometer-derived 2A12 precipitation data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India testifies that they are in excellent agreement with the reference estimates for the data period considered [Indu and Kumar 2015]. References: A. AghaKouchak and A. Mehran (2014), Extended contingency table: Performance metrics for satellite observations and climate model simulations, Water Resources Research, vol. 49, 7144-7149; J. Indu and D. Nagesh Kumar (2015), Evaluation of Precipitation Retrievals from Orbital Data Products of TRMM over a Subtropical basin in India, IEEE Transactions on Geoscience and Remote Sensing, in press, doi: 10.1109/TGRS.2015.2440338.

  15. Uncertainty quantification based on pillars of experiment, theory, and computation. Part I: Data analysis

    NASA Astrophysics Data System (ADS)

    Elishakoff, I.; Sarlin, N.

    2016-06-01

    In this paper we provide a general methodology for the analysis and design of systems involving uncertainties. Available experimental data are enclosed by some geometric figure (triangle, rectangle, ellipse, parallelogram, super ellipse) of minimum area. These areas are then inflated by resorting to the Chebyshev inequality in order to take into account the forecasted data. The next step consists of evaluating the response of the system when uncertainties are confined to one of the above five suitably inflated geometric figures. This step involves a combined theoretical and computational analysis. We evaluate the maximum response of the system subjected to variation of the uncertain parameters in each hypothesized region. The results of the triangular, interval, ellipsoidal, parallelogram, and super ellipsoidal calculi are compared with a view to identifying the region that leads to the minimum of the maximum response. That response is identified as a result of the suggested predictive inference. The methodology thus synthesizes the probabilistic notion with each of the five calculi. Using the term "pillar" in the title was inspired by the News Release (2013) on according the Honda Prize to J. Tinsley Oden, stating, among other things, that "Dr. Oden refers to computational science as the "third pillar" of scientific inquiry, standing beside theoretical and experimental science. Computational science serves as a new paradigm for acquiring knowledge and informing decisions important to humankind". Analysis of systems with uncertainties necessitates employment of all three pillars. The analysis is based on the assumption that the five shapes are each different conservative estimates of the true bounding region. The smallest of the maximal displacements in the x and y directions (for a 2D system) therefore provides the closest estimate of the true displacements based on the above assumption.
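    A hedged sketch of the rectangular calculus only: enclose 2-D data in an axis-aligned box, inflate it per coordinate with the Chebyshev inequality to cover at least 95 % of future samples, and take the maximum of an assumed linear response over the box corners; the data, coverage level, and response are illustrative.

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(5)
    data = rng.normal([1.0, -0.5], [0.2, 0.4], size=(50, 2))   # available experimental data

    mu = data.mean(axis=0)
    sig = data.std(axis=0, ddof=1)
    k = np.sqrt(1.0 / 0.05)                  # Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2 = 5 %
    lo, hi = mu - k * sig, mu + k * sig      # inflated rectangle per coordinate

    def response(x1, x2):                    # illustrative linear system response
        return 3.0 * x1 - 2.0 * x2

    # A linear response attains its maximum over a box at one of the corners
    worst = max(response(x1, x2) for x1, x2 in product([lo[0], hi[0]], [lo[1], hi[1]]))
    print("maximum response over the inflated rectangle:", round(worst, 3))
    ```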

  16. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most of the flood loss models usually applied in standard practice have in common that complex damaging processes are described by simple approaches like stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, even more so in up-scaling procedures for meso-scale applications, where the parameters need to be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with the up-scaling of multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected during the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In the meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results show the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, that multi-variable models are an improvement in flood loss modelling also on the meso-scale. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, the development of probabilistic loss models, like BT-FLEMO used in this study, which inherently provide uncertainty information, is the way forward.

  17. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
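    A local-sensitivity sketch only (the paper argues such local analysis can be insufficient): the Fisher information matrix of the toy model y(t) = a*b*exp(-c*t), in which a and b enter only through their product, has a zero eigenvalue whose eigenvector exposes the structurally unidentifiable direction. The model and values are illustrative.

    ```python
    import numpy as np

    t = np.linspace(0.0, 2.0, 25)            # observation times
    a, b, c, sigma = 2.0, 1.5, 0.8, 0.1      # parameter point and noise level

    # Analytic sensitivities: columns are dy/da, dy/db, dy/dc for y = a*b*exp(-c*t)
    J = np.column_stack([b * np.exp(-c * t),
                         a * np.exp(-c * t),
                         -a * b * t * np.exp(-c * t)])
    fim = J.T @ J / sigma**2                 # local Fisher information matrix

    eigvals, eigvecs = np.linalg.eigh(fim)   # eigenvalues in ascending order
    print("FIM eigenvalues:", np.round(eigvals, 6))
    print("unidentifiable direction (da, db, dc):", np.round(eigvecs[:, 0], 3))
    ```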

  18. Determination of methylmercury in marine biota samples with advanced mercury analyzer: method validation.

    PubMed

    Azemard, Sabine; Vassileva, Emilia

    2015-06-01

    In this paper, we present a simple, fast and cost-effective method for the determination of methylmercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance with the ISO-17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0 ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009 ng), limit of quantification (0.045 ng) and expanded uncertainty (15%, k = 2) were assessed. Estimation of the uncertainty contribution of each parameter and demonstration of the traceability of measurement results were provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was carried out by participation in the IAEA-461 worldwide inter-laboratory comparison exercises. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Evaluating the Global Precipitation Measurement mission with NOAA/NSSL Multi-Radar Multisensor: current status and future directions.

    NASA Astrophysics Data System (ADS)

    Kirstetter, P. E.; Petersen, W. A.; Gourley, J. J.; Kummerow, C.; Huffman, G. J.; Turk, J.; Tanelli, S.; Maggioni, V.; Anagnostou, E. N.; Hong, Y.; Schwaller, M.

    2017-12-01

    Accurate characterization of uncertainties in space-borne precipitation estimates is critical for many applications, including water budget studies or prediction of natural hazards at the global scale. The GPM precipitation Level II (active and passive) and Level III (IMERG) estimates are compared to the high quality and high resolution NEXRAD-based precipitation estimates derived from the NOAA/NSSL's Multi-Radar, Multi-Sensor (MRMS) platform. A surface reference is derived from the MRMS suite of products to be accurate with known uncertainty bounds and measured at a resolution below the pixel sizes of any GPM estimate, providing great flexibility in matching to grid scales or footprints. It provides an independent and consistent reference research framework for directly evaluating GPM precipitation products across a large number of meteorological regimes as a function of resolution, accuracy and sample size. The consistency of the ground and space-based sensors in terms of precipitation detection, typology and quantification is systematically evaluated. Satellite precipitation retrievals are further investigated in terms of precipitation distributions, systematic biases and random errors, the influence of precipitation sub-pixel variability, and comparison between satellite products. Prognostic analysis directly provides feedback to algorithm developers on how to improve the satellite estimates. Specific factors for passive (e.g. surface conditions for GMI) and active (e.g. non-uniform beam filling for DPR) sensors are investigated. This cross-product characterization acts as a bridge to intercalibrate microwave measurements from the GPM constellation satellites and propagates to the combined and global precipitation estimates. Precipitation features previously used to analyze Level II satellite estimates under various precipitation processes are now introduced for Level III to test several assumptions in the IMERG algorithm. Specifically, the contribution of Level II is explicitly characterized and a rigorous characterization is performed to migrate across scales while fully understanding the propagation of errors from Level II to Level III. Perspectives are presented to advance the use of uncertainty as an integral part of QPE for ground-based and space-borne sensors.

  20. Understanding and predicting changing use of groundwater with climate and other uncertainties: a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Costa, F. A. F.; Keir, G.; McIntyre, N.; Bulovic, N.

    2015-12-01

    Most groundwater supply bores in Australia do not have flow metering equipment and so regional groundwater abstraction rates are not well known. Past estimates of unmetered abstraction for regional numerical groundwater modelling typically have not attempted to quantify the uncertainty inherent in the estimation process in detail. In particular, the spatial properties of errors in the estimates are almost always neglected. Here, we apply Bayesian spatial models to estimate these abstractions at a regional scale, using the state-of-the-art computationally inexpensive approaches of integrated nested Laplace approximation (INLA) and stochastic partial differential equations (SPDE). We examine a case study in the Condamine Alluvium aquifer in southern Queensland, Australia; even in this comparatively data-rich area with extensive groundwater abstraction for agricultural irrigation, approximately 80% of bores do not have reliable metered flow records. Additionally, the metering data in this area are characterised by complicated statistical features, such as zero-valued observations, non-normality, and non-stationarity. While this precludes the use of many classical spatial estimation techniques, such as kriging, our model (using the R-INLA package) is able to accommodate these features. We use a joint model to predict both probability and magnitude of abstraction from bores in space and time, and examine the effect of a range of high-resolution gridded meteorological covariates upon the predictive ability of the model. Deviance Information Criterion (DIC) scores are used to assess a range of potential models, which reward good model fit while penalising excessive model complexity. We conclude that maximum air temperature (as a reasonably effective surrogate for evapotranspiration) is the most significant single predictor of abstraction rate; and that a significant spatial effect exists (represented by the SPDE approximation of a Gaussian random field with a Matérn covariance function). Our final model adopts air temperature, solar exposure, and normalized difference vegetation index (NDVI) as covariates, shows good agreement with previous estimates at a regional scale, and additionally offers rigorous quantification of uncertainty in the estimate.

  1. How a European network may help with estimating methane emissions on the French national scale

    NASA Astrophysics Data System (ADS)

    Pison, Isabelle; Berchet, Antoine; Saunois, Marielle; Bousquet, Philippe; Broquet, Grégoire; Conil, Sébastien; Delmotte, Marc; Ganesan, Anita; Laurent, Olivier; Martin, Damien; O'Doherty, Simon; Ramonet, Michel; Spain, T. Gerard; Vermeulen, Alex; Yver Kwok, Camille

    2018-03-01

    Methane emissions on the national scale in France in 2012 are inferred by assimilating continuous atmospheric mixing ratio measurements from nine stations of the European network ICOS located in France and surrounding countries. To assess the robustness of the fluxes deduced by our inversion system, which is based on an objectified quantification of uncertainties, two complementary inversion set-ups are computed and analysed: (i) a regional run correcting for the spatial distribution of fluxes in France and (ii) a sectorial run correcting fluxes for activity sectors on the national scale. In addition, our results for the two set-ups are compared with fluxes produced in the framework of the inversion inter-comparison exercise of the InGOS project. The seasonal variability in fluxes is consistent between different set-ups, with maximum emissions in summer, likely due to agricultural activity. However, very high monthly posterior uncertainties (up to ≈ 65 to 74 % in the sectorial run in May and June) make it difficult to attribute maximum emissions to a specific sector. On the yearly and national scales, the two inversions range from 3835 to 4050 Gg CH4 and from 3570 to 4190 Gg CH4 for the regional and sectorial runs, respectively, consistent with the InGOS products. These estimates are 25 to 55 % higher than the total national emissions from bottom-up approaches (biogeochemical models for natural emissions, plus inventories for anthropogenic ones), consistently pointing at missing or underestimated sources in the inventories and/or in natural sources. More specifically, in the sectorial set-up, agricultural emissions are inferred to be 66 % larger than the estimates reported to the UNFCCC. Uncertainties in the total annual national budget are 108 and 312 Gg CH4, i.e., 3 to 8 %, for the regional and sectorial runs respectively, smaller than the uncertainties in available bottom-up products, proving the added value of top-down atmospheric inversions. Therefore, even though the surface network used in 2012 does not allow us to fully constrain all regions in France accurately, a regional inversion set-up makes it possible to provide estimates of French methane fluxes with an uncertainty in the total budget of less than 10 % on the yearly timescale. Additional sites deployed since 2012 would help to constrain French emissions on finer spatial and temporal scales and to attribute missing emissions to specific sectors.

  2. Reservoir evaluation of thin-bedded turbidites and hydrocarbon pore thickness estimation for an accurate quantification of resource

    NASA Astrophysics Data System (ADS)

    Omoniyi, Bayonle; Stow, Dorrik

    2016-04-01

    One of the major challenges in the assessment of and production from turbidite reservoirs is to take full account of thin and medium-bedded turbidites (<10 cm and <30 cm, respectively). Although such thinner, low-pay sands may comprise a significant proportion of the reservoir succession, they can go unnoticed by conventional analysis and so negatively impact reserve estimation, particularly in fields producing from prolific thick-bedded turbidite reservoirs. Field development plans often take little note of such thin beds, which are therefore bypassed by mainstream production. In fact, the trapped and bypassed fluids can be vital where maximising field value and optimising production are key business drivers. We have studied in detail a succession of thin-bedded turbidites associated with thicker-bedded reservoir facies in the North Brae Field, UKCS, using a combination of conventional logs and cores to assess the significance of thin-bedded turbidites in computing hydrocarbon pore thickness (HPT). This quantity, being an indirect measure of thickness, is critical for an accurate estimation of original-oil-in-place (OOIP). By using a combination of conventional and unconventional logging analysis techniques, we obtain three different results for the reservoir intervals studied. These results include estimated net sand thickness, average sand thickness, and their distribution trend within a 3D structural grid. The net sand thickness varies from 205 to 380 ft, and HPT ranges from 21.53 to 39.90 ft. We observe that an integrated approach (neutron-density cross plots conditioned to cores) to HPT quantification reduces the associated uncertainties significantly, resulting in estimation of 96% of actual HPT. Further work will focus on assessing the 3D dynamic connectivity of the low-pay sands with the surrounding thick-bedded turbidite facies.
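    A worked sketch of the hydrocarbon pore thickness arithmetic the abstract relies on, HPT = sum over net sand intervals of thickness x porosity x (1 - Sw); the interval values below are illustrative placeholders, not the North Brae data.

    ```python
    # (net thickness ft, porosity fraction, water saturation fraction) per thin-bed interval
    thin_beds = [
        (12.0, 0.18, 0.40),
        (25.0, 0.21, 0.35),
        ( 8.0, 0.15, 0.55),
    ]

    hpt = sum(h * phi * (1.0 - sw) for h, phi, sw in thin_beds)
    net_sand = sum(h for h, _, _ in thin_beds)
    print(f"HPT ~ {hpt:.2f} ft of hydrocarbon-filled pore space from {net_sand:.0f} ft of net sand")
    ```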

  3. Uncertainty quantification for environmental models

    USGS Publications Warehouse

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10]. There are also bootstrapping and cross-validation approaches. Sometimes analyses are conducted using surrogate models [12]. The availability of so many options can be confusing. Categorizing methods based on fundamental questions assists in communicating the essential results of uncertainty analyses to stakeholders. Such questions can focus on model adequacy (e.g., How well does the model reproduce observed system characteristics and dynamics?) and sensitivity analysis (e.g., What parameters can be estimated with available data? What observations are important to parameters and predictions? What parameters are important to predictions?), as well as on the uncertainty quantification (e.g., How accurate and precise are the predictions?). The methods can also be classified by the number of model runs required: few (10s to 1000s) or many (10,000s to 1,000,000s). Of the methods listed above, the most computationally frugal are generally those based on local derivatives; MCMC methods tend to be among the most computationally demanding. Surrogate models (emulators) do not necessarily produce computational frugality because many runs of the full model are generally needed to create a meaningful surrogate model. With this categorization, we can, in general, address all the fundamental questions mentioned above using either computationally frugal or demanding methods.
Model development and analysis can thus be conducted consistently using either computationally frugal or demanding methods; alternatively, different fundamental questions can be addressed using methods that require different levels of effort. Based on this perspective, we pose the question: Can computationally frugal methods be useful companions to computationally demanding methods? The reliability of computationally frugal methods generally depends on the model being reasonably linear, which usually means smooth nonlinearities and the assumption of Gaussian errors; both tend to be more valid with more linear models.
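    As a concrete instance of the computationally frugal end of the spectrum discussed above, the sketch below performs a one-at-a-time (OAT) screening with four model runs; the model and parameter values are illustrative, not any particular environmental code.

    ```python
    import numpy as np

    def model(p):                                   # toy environmental model response
        k, s, r = p
        return k * np.exp(-s) + 0.1 * r**2

    base = np.array([1.0, 0.5, 2.0])                # nominal parameter values
    delta = 0.05                                    # 5 % one-at-a-time perturbation
    y0 = model(base)

    for i, name in enumerate(["k", "s", "r"]):
        p = base.copy()
        p[i] *= 1.0 + delta
        slope = (model(p) - y0) / (delta * base[i])  # finite-difference sensitivity
        print(f"d(output)/d({name}) ~ {slope:.3f}")
    ```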

  4. Quantification of uncertainty in aerosol optical thickness retrieval arising from aerosol microphysical model and other sources, applied to Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2014-05-01

    Satellite instruments are nowadays successfully utilised for measuring atmospheric aerosol in many applications as well as in research. Therefore, there is a growing need for rigorous error characterisation of the measurements. Here, we introduce a methodology for quantifying the uncertainty in the retrieval of aerosol optical thickness (AOT). In particular, we concentrate on two aspects: uncertainty due to aerosol microphysical model selection and uncertainty due to imperfect forward modelling. We apply the introduced methodology to the aerosol optical thickness retrieval of the Ozone Monitoring Instrument (OMI) on board NASA's Earth Observing System (EOS) Aura satellite, launched in 2004. We apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness retrieval by propagating aerosol microphysical model selection and forward model error more realistically. For the microphysical model selection problem, we utilise Bayesian model selection and model averaging methods. Gaussian processes are utilised to characterise the smooth systematic discrepancies between the measured and modelled reflectances (i.e. residuals). The spectral correlation is constructed empirically by exploring a set of residuals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques introduced here. The method and the improved uncertainty characterisation are demonstrated by several examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The statistical methodology presented is general; it is not restricted to this particular satellite retrieval application.
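    A hedged sketch of the Bayesian model averaging step only: candidate aerosol models yield AOT estimates and fit residuals, approximate evidences give model weights, and the averaged AOT carries both within- and between-model variance. The residuals, AOT values, and noise level are synthetic placeholders, not OMAERO output.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    sigma = 0.01                                    # assumed reflectance noise level
    models = {
        "weakly_absorbing": {"aot": 0.32, "aot_var": 0.0009,
                             "residuals": rng.normal(0.000, sigma, 30)},
        "biomass_burning":  {"aot": 0.41, "aot_var": 0.0016,
                             "residuals": rng.normal(0.008, sigma, 30)},
        "desert_dust":      {"aot": 0.37, "aot_var": 0.0012,
                             "residuals": rng.normal(0.015, sigma, 30)},
    }

    # Gaussian-likelihood evidence proxy computed from the fit residuals of each model
    log_ev = np.array([-0.5 * np.sum(m["residuals"] ** 2) / sigma**2 for m in models.values()])
    w = np.exp(log_ev - log_ev.max())
    w /= w.sum()

    aot = np.array([m["aot"] for m in models.values()])
    var = np.array([m["aot_var"] for m in models.values()])
    bma_mean = np.sum(w * aot)
    bma_var = np.sum(w * (var + aot**2)) - bma_mean**2   # within- plus between-model spread

    print("weights:", dict(zip(models, np.round(w, 3))))
    print("BMA AOT:", round(bma_mean, 3), " BMA variance:", round(bma_var, 5))
    ```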

  5. The potential for meta-analysis to support decision analysis in ecology.

    PubMed

    Mengersen, Kerrie; MacNeil, M Aaron; Caley, M Julian

    2015-06-01

    Meta-analysis and decision analysis are underpinned by well-developed methods that are commonly applied to a variety of problems and disciplines. While these two fields have been closely linked in some disciplines such as medicine, comparatively little attention has been paid to the potential benefits of linking them in ecology, despite reasonable expectations that benefits would be derived from doing so. Meta-analysis combines information from multiple studies to provide more accurate parameter estimates and to reduce the uncertainty surrounding them. Decision analysis involves selecting among alternative choices using statistical information that helps to shed light on the uncertainties involved. By linking meta-analysis to decision analysis, improved decisions can be made, with quantification of the costs and benefits of alternate decisions supported by a greater density of information. Here, we briefly review concepts of both meta-analysis and decision analysis, illustrating the natural linkage between them and the benefits from explicitly linking one to the other. We discuss some examples in which this linkage has been exploited in the medical arena and how improvements in precision and reduction of structural uncertainty inherent in a meta-analysis can provide substantive improvements to decision analysis outcomes by reducing uncertainty in expected loss and maximising information from across studies. We then argue that these significant benefits could be translated to ecology, in particular to the problem of making optimal ecological decisions in the face of uncertainty. Copyright © 2013 John Wiley & Sons, Ltd.
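    A minimal numerical illustration of the linkage, with entirely hypothetical effect sizes and loss values: inverse-variance (fixed-effect) pooling first reduces the uncertainty around the effect estimate, and the pooled distribution then feeds an expected-loss comparison of two actions.

```python
# Toy sketch (hypothetical numbers): inverse-variance pooling of study
# estimates, then an expected-loss comparison of two management actions.
import numpy as np

effects   = np.array([0.30, 0.10, 0.25, 0.18])   # per-study effect estimates
variances = np.array([0.02, 0.05, 0.01, 0.03])   # per-study variances

w          = 1.0 / variances
pooled     = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
pooled_var = 1.0 / np.sum(w)                      # its (reduced) uncertainty

# Decision analysis: expected loss of acting vs not acting under the pooled
# posterior (normal approximation); loss figures are illustrative only.
draws       = np.random.default_rng(0).normal(pooled, np.sqrt(pooled_var), 10_000)
loss_act    = np.mean(np.where(draws > 0, 1.0, 10.0))   # cheap if effect is real
loss_do_not = np.mean(np.where(draws > 0, 8.0, 0.5))    # costly if effect is real
print(pooled, pooled_var, loss_act, loss_do_not)
```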

  6. Quantification of uncertainties in global grazing systems assessment

    NASA Astrophysics Data System (ADS)

    Fetzel, T.; Havlik, P.; Herrero, M.; Kaplan, J. O.; Kastner, T.; Kroisleitner, C.; Rolinski, S.; Searchinger, T.; Van Bodegom, P. M.; Wirsenius, S.; Erb, K.-H.

    2017-07-01

    Livestock systems play a key role in global sustainability challenges like food security and climate change, yet many unknowns and large uncertainties prevail. We present a systematic, spatially explicit assessment of uncertainties related to grazing intensity (GI), a key metric for assessing ecological impacts of grazing, by combining existing data sets on (a) grazing feed intake, (b) the spatial distribution of livestock, (c) the extent of grazing land, and (d) its net primary productivity (NPP). An analysis of the resulting 96 maps implies that on average 15% of the grazing land NPP is consumed by livestock. GI is low in most of the world's grazing lands, but hotspots of very high GI prevail in 1% of the total grazing area. The agreement between GI maps is good on one fifth of the world's grazing area, while on the remainder, it is low to very low. Largest uncertainties are found in global drylands and where grazing land bears trees (e.g., the Amazon basin or the Taiga belt). In some regions like India or Western Europe, massive uncertainties even result in GI > 100% estimates. Our sensitivity analysis indicates that the input data for NPP, animal distribution, and grazing area contribute about equally to the total variability in GI maps, while grazing feed intake is a less critical variable. We argue that a general improvement in quality of the available global level data sets is a precondition for improving the understanding of the role of livestock systems in the context of global environmental change or food security.
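    The ensemble character of the assessment can be sketched as a factorial combination of alternative input data sets, with GI computed as grazing feed intake divided by grazing-land NPP; the scalar values below are invented, and the combination yields 48 members rather than the 96 maps of the study.

```python
# Sketch with made-up scalar values: a factorial combination of alternative
# input data sets yields an ensemble of grazing-intensity (GI) estimates,
# GI = feed intake from grazing / NPP of grazing land.
from itertools import product
import numpy as np

feed_intake_PgC  = [1.5, 2.0, 2.6]            # alternative feed-intake data sets
livestock_share  = [0.8, 0.9]                 # share allocated to grazing land
grazing_area_Gha = [2.5, 3.0, 3.5, 4.0]       # alternative grazing-land extents
npp_gC_per_m2    = [300., 400.]               # alternative NPP data sets

gi = []
for intake, share, area, npp in product(feed_intake_PgC, livestock_share,
                                        grazing_area_Gha, npp_gC_per_m2):
    npp_PgC = npp * area * 1e13 * 1e-15       # Gha -> m2, gC -> PgC
    gi.append(100.0 * intake * share / npp_PgC)

gi = np.array(gi)                             # 3*2*4*2 = 48 ensemble members here
print(len(gi), round(gi.mean(), 1), round(gi.min(), 1), round(gi.max(), 1))
```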

  7. Modeling transport phenomena and uncertainty quantification in solidification processes

    NASA Astrophysics Data System (ADS)

    Fezi, Kyle S.

    Direct chill (DC) casting is the primary processing route for wrought aluminum alloys. This semicontinuous process consists of primary cooling as the metal is pulled through a water-cooled mold followed by secondary cooling with a water jet spray and free-falling water. To gain insight into this complex solidification process, a fully transient model of DC casting was developed to predict the transport phenomena of aluminum alloys for various conditions. This model is capable of solving mixture mass, momentum, energy, and species conservation equations during multicomponent solidification. Various DC casting process parameters were examined for their effect on transport phenomena predictions in an alloy of commercial interest (aluminum alloy 7050). The practice of placing a wiper to divert cooling water from the ingot surface was studied and the results showed that placement closer to the mold causes remelting at the surface and increases susceptibility to bleed-outs. Numerical models of metal alloy solidification, like the one previously mentioned, are used to gain insight into physical phenomena that cannot be observed experimentally. However, uncertainty in model inputs causes uncertainty in results and those insights. The effect of model assumptions and probable input variability on the level of uncertainty in model predictions has not yet been quantified in solidification modeling. As a step towards understanding the effect of uncertain inputs on solidification modeling, uncertainty quantification (UQ) and sensitivity analysis were first performed on a transient solidification model of a simple binary alloy (Al-4.5wt.%Cu) in a rectangular cavity with both columnar and equiaxed solid growth models. This analysis was followed by quantifying the uncertainty in predictions from the recently developed transient DC casting model. The PRISM Uncertainty Quantification (PUQ) framework quantified the uncertainty and sensitivity in macrosegregation, solidification time, and sump profile predictions. Uncertain model inputs of interest included the secondary dendrite arm spacing, equiaxed particle size, equiaxed packing fraction, heat transfer coefficient, and material properties. The most influential input parameters for predicting the macrosegregation level were the dendrite arm spacing, which also strongly depended on the choice of mushy zone permeability model, and the equiaxed packing fraction. Additionally, the degree of uncertainty required to produce accurate predictions depended on the output of interest from the model.

  8. Computational Infrastructure for Engine Structural Performance Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    1997-01-01

    Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefits estimation of new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique infrastructure that is ready to credibly evaluate new and future engine structural concepts throughout the development cycle, from initial concept, to design and fabrication, to service performance and maintenance and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.

  9. Determination of Vitamin E in Cereal Products and Biscuits by GC-FID.

    PubMed

    Pasias, Ioannis N; Kiriakou, Ioannis K; Papakonstantinou, Lila; Proestos, Charalampos

    2018-01-01

    A rapid, precise and accurate method for the determination of vitamin E (α-tocopherol) in cereal products and biscuits has been developed. The uncertainty was calculated for the first time, and the methods were performed for different cereal products and biscuits, characterized as "superfoods". The limits of detection and quantification were calculated. The accuracy and precision were estimated using the certified reference material FAPAS T10112QC, and the determined values were in good accordance with the certified values. The health claims according to the daily reference values for vitamin E were calculated, and the results proved that the majority of the samples examined showed a percentage daily value higher than 15%.
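    The abstract does not state which convention was used for the limits; a common calibration-based choice (3.3 and 10 times the residual standard deviation divided by the slope) is sketched below with invented calibration data.

```python
# Sketch of one common convention (ICH-style) for limits of detection and
# quantification from a calibration line; the numbers are illustrative,
# not the paper's data.
import numpy as np

conc   = np.array([5., 10., 20., 40., 80.])           # alpha-tocopherol, mg/kg
signal = np.array([12.1, 24.3, 47.9, 97.0, 193.5])    # GC-FID peak area (arbitrary)

slope, intercept = np.polyfit(conc, signal, 1)
residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope      # limit of detection
loq = 10.0 * residual_sd / slope     # limit of quantification
print(round(lod, 2), round(loq, 2))
```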

  10. Determination of Vitamin E in Cereal Products and Biscuits by GC-FID

    PubMed Central

    Kiriakou, Ioannis K.; Papakonstantinou, Lila

    2018-01-01

    A rapid, precise and accurate method for the determination of vitamin E (α-tocopherol) in cereal products and biscuits has been developed. The uncertainty was calculated for the first time, and the methods were performed for different cereal products and biscuits, characterized as “superfoods”. The limits of detection and quantification were calculated. The accuracy and precision were estimated using the certified reference material FAPAS T10112QC, and the determined values were in good accordance with the certified values. The health claims according to the daily reference values for vitamin E were calculated, and the results proved that the majority of the samples examined showed a percentage daily value higher than 15%. PMID:29301245

  11. Exploring prediction uncertainty of spatial data in geostatistical and machine learning Approaches

    NASA Astrophysics Data System (ADS)

    Klump, J. F.; Fouedjio, F.

    2017-12-01

    Geostatistical methods such as kriging with external drift as well as machine learning techniques such as quantile regression forest have been intensively used for modelling spatial data. In addition to providing predictions for target variables, both approaches are able to deliver a quantification of the uncertainty associated with the prediction at a target location. Geostatistical approaches are, by essence, adequate for providing such prediction uncertainties and their behaviour is well understood. However, they often require significant data pre-processing and rely on assumptions that are rarely met in practice. Machine learning algorithms such as random forest regression, on the other hand, require less data pre-processing and are non-parametric. This makes the application of machine learning algorithms to geostatistical problems an attractive proposition. The objective of this study is to compare kriging with external drift and quantile regression forest with respect to their ability to deliver reliable prediction uncertainties of spatial data. In our comparison we use both simulated and real world datasets. Apart from classical performance indicators, comparisons make use of accuracy plots, probability interval width plots, and the visual examinations of the uncertainty maps provided by the two approaches. By comparing random forest regression to kriging we found that both methods produced comparable maps of estimated values for our variables of interest. However, the measure of uncertainty provided by random forest seems to be quite different to the measure of uncertainty provided by kriging. In particular, the lack of spatial context can give misleading results in areas without ground truth data. These preliminary results raise questions about assessing the risks associated with decisions based on the predictions from geostatistical and machine learning algorithms in a spatial context, e.g. mineral exploration.
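    A rough stand-in for the quantile regression forest used in the study is to derive prediction intervals from the spread of per-tree predictions of an ordinary random forest, as sketched below on synthetic data; this is an approximation, not the authors' implementation.

```python
# Sketch (synthetic data): approximate prediction intervals from the spread of
# per-tree predictions of a random forest, a rough stand-in for a full
# quantile regression forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(300, 2))                   # e.g. easting/northing
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + rng.normal(0, 0.2, 300)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

X_new = rng.uniform(0, 10, size=(5, 2))
per_tree = np.stack([t.predict(X_new) for t in rf.estimators_])   # (500, 5)
lower, upper = np.percentile(per_tree, [5, 95], axis=0)
print(rf.predict(X_new), lower, upper)   # point prediction and 90% interval
```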

  12. Quantification of key long-term risks at CO₂ sequestration sites: Latest results from US DOE's National Risk Assessment Partnership (NRAP) Project

    DOE PAGES

    Pawar, Rajesh; Bromhal, Grant; Carroll, Susan; ...

    2014-12-31

    Risk assessment for geologic CO₂ storage, including quantification of risks, is an area of active investigation. The National Risk Assessment Partnership (NRAP) is a US Department of Energy (US-DOE) effort focused on developing a defensible, science-based methodology and platform for quantifying risk profiles at geologic CO₂ sequestration sites. NRAP has been developing a methodology that centers around the development of an integrated assessment model (IAM) using a system modeling approach to quantify risks and risk profiles. The IAM has been used to calculate risk profiles with a few key potential impacts due to potential CO₂ and brine leakage. The simulation results are also used to determine long-term storage security relationships and compare the long-term storage effectiveness to the IPCC storage permanence goal. Additionally, we also demonstrate application of the IAM for uncertainty quantification in order to determine the parameters to which the uncertainty in model results is most sensitive.

  13. Regression without truth with Markov chain Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2017-03-01

    Regression without truth (RWT) is a statistical technique for estimating error model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which is otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied to various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution with probability density function known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables the estimation of the joint posterior distribution of the parameters of the error model, straightforward quantification of uncertainty of the estimates, and estimation of the true value of the measurand and corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand, and it generally has a much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute estimates of the true values and their uncertainty, information that is vital for diagnosis based on QIBs.
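    The sketch below is a generic random-walk Metropolis sampler for a toy measurement-error model (a single mean and noise level), not the authors' full RWT formulation; it only illustrates how MCMC samples yield credible intervals for the parameters directly.

```python
# Minimal random-walk Metropolis sketch for a toy measurement-error model
# y_i = mu + e_i, e_i ~ N(0, sigma^2); not the authors' full RWT formulation.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(60.0, 5.0, size=45)           # e.g. EF measurements for 45 patients

def log_post(mu, log_sigma):
    sigma = np.exp(log_sigma)
    # flat prior on mu and on log(sigma)
    return -y.size * log_sigma - 0.5 * np.sum((y - mu) ** 2) / sigma**2

chain, state = [], np.array([50.0, 1.0])     # start (mu, log sigma)
lp = log_post(*state)
for _ in range(20_000):
    prop = state + rng.normal(0, [0.8, 0.1])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    chain.append(state.copy())

chain = np.array(chain)[5_000:]              # discard burn-in
mu_ci = np.percentile(chain[:, 0], [5, 95])  # 90% credible interval for mu
print(chain[:, 0].mean(), mu_ci)
```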

  14. Investigation of Biotransport in a Tumor With Uncertain Material Properties Using a Nonintrusive Spectral Uncertainty Quantification Method.

    PubMed

    Alexanderian, Alen; Zhu, Liang; Salloum, Maher; Ma, Ronghui; Yu, Meilin

    2017-09-01

    In this study, statistical models are developed for modeling uncertain heterogeneous permeability and porosity in tumors, and the resulting uncertainties in pressure and velocity fields during an intratumoral injection are quantified using a nonintrusive spectral uncertainty quantification (UQ) method. Specifically, the uncertain permeability is modeled as a log-Gaussian random field, represented using a truncated Karhunen-Lòeve (KL) expansion, and the uncertain porosity is modeled as a log-normal random variable. The efficacy of the developed statistical models is validated by simulating the concentration fields with permeability and porosity of different uncertainty levels. The irregularity in the concentration field bears reasonable visual agreement with that in MicroCT images from experiments. The pressure and velocity fields are represented using polynomial chaos (PC) expansions to enable efficient computation of their statistical properties. The coefficients in the PC expansion are computed using a nonintrusive spectral projection method with the Smolyak sparse quadrature. The developed UQ approach is then used to quantify the uncertainties in the random pressure and velocity fields. A global sensitivity analysis is also performed to assess the contribution of individual KL modes of the log-permeability field to the total variance of the pressure field. It is demonstrated that the developed UQ approach can effectively quantify the flow uncertainties induced by uncertain material properties of the tumor.
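    A one-dimensional sketch of the truncated KL representation is given below, assuming an exponential covariance and illustrative parameters rather than the tumor-specific statistics of the study.

```python
# 1-D sketch (assumed exponential covariance, illustrative parameters) of a
# truncated Karhunen-Loeve expansion of a log-Gaussian permeability field.
import numpy as np

n, L, corr_len, sigma = 200, 1.0, 0.2, 0.5
x = np.linspace(0.0, L, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

eigval, eigvec = np.linalg.eigh(C)           # ascending order
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

n_kl = 10                                    # retain the 10 dominant modes
rng = np.random.default_rng(3)
xi = rng.standard_normal(n_kl)               # independent N(0,1) KL coefficients
log_k = -12.0 + eigvec[:, :n_kl] @ (np.sqrt(eigval[:n_kl]) * xi)
permeability = np.exp(log_k)                 # one random-field realization
print(permeability.min(), permeability.max())
```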

  15. Decomposition Technique for Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)

    2014-01-01

    The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both current damage state as well as future damage accumulation. Remaining life is computed by subtracting the instance when the extrapolated damage reaches the failure threshold from the instance when the prediction is made.
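    The two-map decomposition can be sketched with simple polynomial regressions and invented training data, as below; the feature, load, and damage values are hypothetical and only illustrate the offline/online split and the extrapolation to a failure threshold.

```python
# Toy sketch of the two-map decomposition (all data invented): fit a
# feature-to-damage map and a condition-to-damage-rate map offline, then use
# them online to estimate current damage and extrapolate to a failure threshold.
import numpy as np

# --- offline: ground-truth training data (hypothetical) ---
feature = np.array([0.1, 0.3, 0.5, 0.7, 0.9])         # e.g. a capacity-fade feature
damage  = np.array([0.05, 0.22, 0.48, 0.71, 0.93])
load    = np.array([0.5, 1.0, 1.5, 2.0])              # operating condition
rate    = np.array([0.004, 0.010, 0.018, 0.030])      # damage accumulated per cycle

feat_to_damage = np.polyfit(feature, damage, 1)        # two regression maps
load_to_rate   = np.polyfit(load, rate, 2)

# --- online: current measurement and expected future usage ---
current_feature, future_load = 0.6, 1.2
d_now  = np.polyval(feat_to_damage, current_feature)   # current damage state
d_rate = np.polyval(load_to_rate, future_load)         # expected damage per cycle

threshold = 1.0
rul_cycles = max((threshold - d_now) / d_rate, 0.0)    # remaining useful life
print(round(d_now, 3), round(d_rate, 4), round(rul_cycles, 1))
```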

  16. Calibrating Physical Parameters in House Models Using Aggregate AC Power Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Stevens, Andrew J.; Lian, Jianming

    For residential houses, the air conditioning (AC) units are one of the major resources that can provide significant flexibility in energy use for the purpose of demand response. To quantify the flexibility, the characteristics of all the houses need to be accurately estimated, so that certain house models can be used to predict the dynamics of the house temperatures in order to adjust the setpoints accordingly to provide demand response while maintaining the same comfort levels. In this paper, we propose an approach using the Reverse Monte Carlo modeling method and aggregate house models to calibrate the distribution parameters ofmore » the house models for a population of residential houses. Given the aggregate AC power demand for the population, the approach can successfully estimate the distribution parameters for the sensitive physical parameters based on our previous uncertainty quantification study, such as the mean of the floor areas of the houses.« less

  17. A frequentist approach to computer model calibration

    DOE PAGES

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    2016-05-05

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.
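    The bootstrap step can be illustrated with a deliberately simplified calibration (a single least-squares parameter on synthetic data), as in the sketch below; it is not the authors' semiparametric estimator, only the percentile-interval mechanics.

```python
# Generic percentile-bootstrap sketch (synthetic data, simple least-squares
# calibration); not the authors' semiparametric estimator.
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 60)                     # experimental input settings
y = 2.5 * x + rng.normal(0, 0.3, 60)          # physical observations

def calibrate(x, y):
    # least-squares estimate of the single calibration parameter theta
    return np.sum(x * y) / np.sum(x * x)

theta_hat = calibrate(x, y)
boot = []
for _ in range(2000):
    idx = rng.integers(0, x.size, x.size)     # resample observation pairs
    boot.append(calibrate(x[idx], y[idx]))
ci = np.percentile(boot, [2.5, 97.5])         # 95% bootstrap confidence interval
print(round(theta_hat, 3), np.round(ci, 3))
```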

  18. A frequentist approach to computer model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Raymond K. W.; Storlie, Curtis Byron; Lee, Thomas C. M.

    The paper considers the computer model calibration problem and provides a general frequentist solution. Under the framework proposed, the data model is semiparametric with a non-parametric discrepancy function which accounts for any discrepancy between physical reality and the computer model. In an attempt to solve a fundamentally important (but often ignored) identifiability issue between the computer model parameters and the discrepancy function, the paper proposes a new and identifiable parameterization of the calibration problem. It also develops a two-step procedure for estimating all the relevant quantities under the new parameterization. This estimation procedure is shown to enjoy excellent rates of convergence and can be straightforwardly implemented with existing software. For uncertainty quantification, bootstrapping is adopted to construct confidence regions for the quantities of interest. Finally, the practical performance of the methodology is illustrated through simulation examples and an application to a computational fluid dynamics model.

  19. Methodological challenges for the evaluation of clinical effectiveness in the context of accelerated regulatory approval: an overview.

    PubMed

    Woolacott, Nerys; Corbett, Mark; Jones-Diette, Julie; Hodgson, Robert

    2017-10-01

    Regulatory authorities are approving innovative therapies with limited evidence. Although this level of data is sufficient for the regulator to establish an acceptable risk-benefit balance, it is problematic for downstream health technology assessment, where assessment of cost-effectiveness requires reliable estimates of effectiveness relative to existing clinical practice. Some key issues associated with a limited evidence base include using data, from nonrandomized studies, from small single-arm trials, or from single-center trials; and using surrogate end points. We examined these methodological challenges through a pragmatic review of the available literature. Methods to adjust nonrandomized studies for confounding are imperfect. The relative treatment effect generated from single-arm trials is uncertain and may be optimistic. Single-center trial results may not be generalizable. Surrogate end points, on average, overestimate treatment effects. Current methods for analyzing such data are limited, and effectiveness claims based on these suboptimal forms of evidence are likely to be subject to significant uncertainty. Assessments of cost-effectiveness, based on the modeling of such data, are likely to be subject to considerable uncertainty. This uncertainty must not be underestimated by decision makers: methods for its quantification are required and schemes to protect payers from the cost of uncertainty should be implemented. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  20. Multi-fidelity numerical simulations of shock/turbulent-boundary layer interaction with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Bermejo-Moreno, Ivan; Campo, Laura; Larsson, Johan; Emory, Mike; Bodart, Julien; Palacios, Francisco; Iaccarino, Gianluca; Eaton, John

    2013-11-01

    We study the interaction between an oblique shock wave and the turbulent boundary layers inside a nearly-square duct by combining wall-modeled LES, 2D and 3D RANS simulations, targeting the experiment of Campo, Helmer & Eaton, 2012 (nominal conditions: M = 2.05, Reθ = 6,500). A primary objective is to quantify the effect of aleatory and epistemic uncertainties on the STBLI. Aleatory uncertainties considered include the inflow conditions (Mach number of the incoming air stream and thickness of the boundary layers) and perturbations of the duct geometry upstream of the interaction. The epistemic uncertainty under consideration focuses on the RANS turbulence model form by injecting perturbations in the Reynolds stress anisotropy in regions of the flow where the model assumptions (in particular, the Boussinesq eddy-viscosity hypothesis) may be invalid. These perturbations are then propagated through the flow solver into the solution. The uncertainty quantification (UQ) analysis is done through 2D and 3D RANS simulations, assessing the importance of the three-dimensional effects imposed by the nearly-square duct geometry. Wall-modeled LES are used to verify elements of the UQ methodology and to explore the flow features and physics of the STBLI for multiple shock strengths. Financial support from the United States Department of Energy under the PSAAP program is gratefully acknowledged.

  1. Estimation of river and stream temperature trends under haphazard sampling

    USGS Publications Warehouse

    Gray, Brian R.; Lyubchich, Vyacheslav; Gel, Yulia R.; Rogala, James T.; Robertson, Dale M.; Wei, Xiaoqiao

    2015-01-01

    Long-term temporal trends in water temperature in rivers and streams are typically estimated under the assumption of evenly-spaced space-time measurements. However, sampling times and dates associated with historical water temperature datasets and some sampling designs may be haphazard. As a result, trends in temperature may be confounded with trends in time or space of sampling which, in turn, may yield biased trend estimators and thus unreliable conclusions. We address this concern using multilevel (hierarchical) linear models, where time effects are allowed to vary randomly by day and date effects by year. We evaluate the proposed approach by Monte Carlo simulations with imbalance, sparse data and confounding by trend in time and date of sampling. Simulation results indicate unbiased trend estimators while results from a case study of temperature data from the Illinois River, USA conform to river thermal assumptions. We also propose a new nonparametric bootstrap inference on multilevel models that allows for a relatively flexible and distribution-free quantification of uncertainties. The proposed multilevel modeling approach may be elaborated to accommodate nonlinearities within days and years when sampling times or dates typically span temperature extremes.

  2. Effects of contemporary land-use and land-cover change on the carbon balance of terrestrial ecosystems in the United States

    USGS Publications Warehouse

    Sleeter, Benjamin M.; Liu, Jinxun; Daniel, Colin; Rayfield, Bronwyn; Sherba, Jason; Hawbaker, Todd J.; Zhu, Zhiliang; Selmants, Paul; Loveland, Thomas R.

    2018-01-01

    Changes in land use and land cover (LULC) can have profound effects on terrestrial carbon dynamics, yet their effects on the global carbon budget remain uncertain. While land change impacts on ecosystem carbon dynamics have been the focus of numerous studies, few efforts have been based on observational data incorporating multiple ecosystem types spanning large geographic areas over long time horizons. In this study we use a variety of synoptic-scale remote sensing data to estimate the effect of LULC changes associated with urbanization, agricultural expansion and contraction, forest harvest, and wildfire on the carbon balance of terrestrial ecosystems (forest, grasslands, shrublands, and agriculture) in the conterminous United States (i.e. excluding Alaska and Hawaii) between 1973 and 2010. We estimate large net declines in the area of agriculture and forest, along with relatively small increases in grasslands and shrublands. The largest net change in any class was an estimated gain of 114 865 km2 of developed lands, an average rate of 3282 km2 yr−1. On average, US ecosystems sequestered carbon at an annual rate of 254 Tg C yr−1. In forest lands, the net sink declined by 35% over the study period, largely a result of land-use legacy, increasing disturbances, and reductions in forest area due to land use conversion. Uncertainty in LULC change data contributed to a ~16% margin of error in the annual carbon sink estimate prior to 1985 (approximately ±40 Tg C yr−1). Improvements in LULC and disturbance mapping starting in the mid-1980s reduced this uncertainty by ~50% after 1985. We conclude that changes in LULC are a critical component to understanding ecosystem carbon dynamics, and continued improvements in detection, quantification, and attribution of change have the potential to significantly reduce current uncertainties.

  3. Effects of contemporary land-use and land-cover change on the carbon balance of terrestrial ecosystems in the United States

    NASA Astrophysics Data System (ADS)

    Sleeter, Benjamin M.; Liu, Jinxun; Daniel, Colin; Rayfield, Bronwyn; Sherba, Jason; Hawbaker, Todd J.; Zhu, Zhiliang; Selmants, Paul C.; Loveland, Thomas R.

    2018-04-01

    Changes in land use and land cover (LULC) can have profound effects on terrestrial carbon dynamics, yet their effects on the global carbon budget remain uncertain. While land change impacts on ecosystem carbon dynamics have been the focus of numerous studies, few efforts have been based on observational data incorporating multiple ecosystem types spanning large geographic areas over long time horizons. In this study we use a variety of synoptic-scale remote sensing data to estimate the effect of LULC changes associated with urbanization, agricultural expansion and contraction, forest harvest, and wildfire on the carbon balance of terrestrial ecosystems (forest, grasslands, shrublands, and agriculture) in the conterminous United States (i.e. excluding Alaska and Hawaii) between 1973 and 2010. We estimate large net declines in the area of agriculture and forest, along with relatively small increases in grasslands and shrublands. The largest net change in any class was an estimated gain of 114 865 km2 of developed lands, an average rate of 3282 km2 yr‑1. On average, US ecosystems sequestered carbon at an annual rate of 254 Tg C yr‑1. In forest lands, the net sink declined by 35% over the study period, largely a result of land-use legacy, increasing disturbances, and reductions in forest area due to land use conversion. Uncertainty in LULC change data contributed to a ~16% margin of error in the annual carbon sink estimate prior to 1985 (approximately ±40 Tg C yr‑1). Improvements in LULC and disturbance mapping starting in the mid-1980s reduced this uncertainty by ~50% after 1985. We conclude that changes in LULC are a critical component to understanding ecosystem carbon dynamics, and continued improvements in detection, quantification, and attribution of change have the potential to significantly reduce current uncertainties.

  4. Disdrometer-based C-Band Radar Quantitative Precipitation Estimation (QPE) in a highly complex terrain region in tropical Colombia.

    NASA Astrophysics Data System (ADS)

    Sepúlveda, J.; Hoyos Ortiz, C. D.

    2017-12-01

    An adequate quantification of precipitation over land is critical for many societal applications including agriculture, hydroelectricity generation, water supply, and risk management associated with extreme events. Rain gauges, the traditional method for precipitation estimation and an excellent one for estimating the volume of liquid water during a particular precipitation event, do not fully capture the high spatial variability of the phenomenon, which is a requirement for almost all practical applications. On the other hand, the weather radar, an active remote sensing sensor, provides a proxy for rainfall with fine spatial resolution and adequate temporal sampling; however, it does not measure surface precipitation. In order to fully exploit the capabilities of the weather radar, it is necessary to develop quantitative precipitation estimation (QPE) techniques combining radar information with in-situ measurements. Different QPE methodologies are explored and adapted to local observations in a highly complex terrain region in tropical Colombia using a C-Band radar and a relatively dense network of rain gauges and disdrometers. One important result is that the expressions reported in the literature for extratropical locations are not representative of the conditions found in the tropical region studied. In addition to reproducing the state-of-the-art techniques, a new multi-stage methodology based on radar-derived variables and disdrometer data is proposed in order to achieve the best QPE possible. The main motivation for this new methodology is that most traditional QPE methods do not directly take into account the different uncertainty sources involved in the process. The main advantage of the multi-stage model compared to traditional models is that it allows assessing and quantifying the uncertainty in the surface rain rate estimation. The sub-hourly rainfall estimations using the multi-stage methodology are realistic compared to observed data, in spite of the many sources of uncertainty, including the sampling volume, the different physical principles of the sensors, the incomplete understanding of the microphysics of precipitation and, most importantly, the rapidly varying droplet size distribution.

  5. Plasticity models of material variability based on uncertainty quantification techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Reese E.; Rizzi, Francesco; Boyce, Brad

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  6. Quantification of agricultural drought occurrence as an estimate for insurance programs

    NASA Astrophysics Data System (ADS)

    Bannayan, M.; Hoogenboom, G.

    2015-11-01

    Temporal irregularities of rainfall and drought have major impacts on rainfed cropping systems. The main goal of this study was to develop an approach for realizing drought occurrence based on local winter wheat yield loss and rainfall. The study domain included 11 counties in the state of Washington that actively grow rainfed winter wheat, and an uncertainty rainfall evaluation model was applied using daily rainfall values from 1985 to 2007. An application was developed that calculates a rainfall index for insurance that was then used to determine the drought intensity for each study year and for each study site. Evaluation of the drought intensity showed that both the 1999-2000 and 2000-2001 growing seasons were stressful years for most of the study locations, while the 2005-2006 and the 2006-2007 growing seasons experienced the lowest drought intensity for all locations. Our results are consistent with local extension reports of drought occurrences. Quantification of drought intensity based on this application could provide a convenient index for insurance companies for determining the effect of rainfall and drought on crop yield loss under the varying weather conditions of semi-arid regions.

  7. Land grabbing: a preliminary quantification of economic impacts on rural livelihoods.

    PubMed

    Davis, Kyle F; D'Odorico, Paolo; Rulli, Maria Cristina

    2014-01-01

    Global demands on agricultural land are increasing due to population growth, dietary changes and the use of biofuels. Their effect on food security is to reduce humans' ability to cope with the uncertainties of global climate change. In light of the 2008 food crisis, to secure reliable future access to sufficient agricultural land, many nations and corporations have begun purchasing large tracts of land in the global South, a phenomenon deemed "land grabbing" by popular media. Because land investors frequently export crops without providing adequate employment, this represents an effective income loss for local communities. We study 28 countries targeted by large-scale land acquisitions [comprising 87 % of reported cases and 27 million hectares (ha)] and estimate the effects of such investments on local communities' incomes. We find that this phenomenon can potentially affect the incomes of ~12 million people globally with implications for food security, poverty levels and urbanization. While it is important to note that our study incorporates a number of assumptions and limitations, it provides a much needed initial quantification of the economic impacts of large-scale land acquisitions on rural livelihoods.

  8. A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data

    PubMed Central

    He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei

    2017-01-01

    This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from target structure. The baseline model correlates two proposed damage sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage sensitive features are extracted from the first received S0 mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method is demonstrated under different loading and damage conditions. PMID:28902148

  9. Quantification and propagation of disciplinary uncertainty via Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Mantis, George Constantine

    2002-08-01

    Several needs exist in the military, commercial, and civil sectors for new hypersonic systems. These needs remain unfulfilled, due in part to the uncertainty encountered in designing these systems. This uncertainty takes a number of forms, including disciplinary uncertainty, that which is inherent in the analytical tools utilized during the design process. Yet, few efforts to date empower the designer with the means to account for this uncertainty within the disciplinary analyses. In the current state-of-the-art in design, the effects of this unquantifiable uncertainty significantly increase the risks associated with new design efforts. Typically, the risk proves too great to allow a given design to proceed beyond the conceptual stage. To that end, the research encompasses the formulation and validation of a new design method, a systematic process for probabilistically assessing the impact of disciplinary uncertainty. The method implements Bayesian Statistics theory to quantify this source of uncertainty, and propagate its effects to the vehicle system level. Comparison of analytical and physical data for existing systems, modeled a priori in the given analysis tools, leads to quantification of uncertainty in those tools' calculation of discipline-level metrics. Then, after exploration of the new vehicle's design space, the quantified uncertainty is propagated probabilistically through the design space. This ultimately results in the assessment of the impact of disciplinary uncertainty on the confidence in the design solution: the final shape and variability of the probability functions defining the vehicle's system-level metrics. Although motivated by the hypersonic regime, the proposed treatment of uncertainty applies to any class of aerospace vehicle, just as the problem itself affects the design process of any vehicle. A number of computer programs comprise the environment constructed for the implementation of this work. Application to a single-stage-to-orbit (SSTO) reusable launch vehicle concept, developed by the NASA Langley Research Center under the Space Launch Initiative, provides the validation case for this work, with the focus placed on economics, aerothermodynamics, propulsion, and structures metrics. (Abstract shortened by UMI.)

  10. Quantification of the contribution of nitrogen from septic tanks to ground water in Spanish Springs Valley, Nevada

    USGS Publications Warehouse

    Rosen, Michael R.; Kropf, Christian; Thomas, Karen A.

    2006-01-01

    Analysis of total dissolved nitrogen concentrations from soil water samples collected within the soil zone under septic tank leach fields in Spanish Springs Valley, Nevada, shows a median concentration of approximately 44 milligrams per liter (mg/L) from more than 300 measurements taken from four septic tank systems. Using two simple mass balance calculations, the concentration of total dissolved nitrogen potentially reaching the ground-water table ranges from 25 to 29 mg/L. This indicates that approximately 29 to 32 metric tons of nitrogen enters the aquifer every year from natural recharge and from the 2,070 houses that use septic tanks in the densely populated portion of Spanish Springs Valley. Natural recharge contributes only 0.25 metric tons because the total dissolved nitrogen concentration of natural recharge was estimated to be low (0.8 mg/L). Although there are many uncertainties in this estimate, the sensitivity of these uncertainties to the calculated load is relatively small, indicating that these values likely are accurate to within an order of magnitude. The nitrogen load calculation will be used as an input function for a ground-water flow and transport model that will be used to test management options for controlling nitrogen contamination in the basin.

  11. Evaluation of Global Observations-Based Evapotranspiration Datasets and IPCC AR4 Simulations

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Seneviratne, S. I.; Jimenez, C.; Corti, T.; Hirschi, M.; Balsamo, G.; Ciais, P.; Dirmeyer, P.; Fisher, J. B.; Guo, Z.; hide

    2011-01-01

    Quantification of global land evapotranspiration (ET) has long been associated with large uncertainties due to the lack of reference observations. Several recently developed products now provide the capacity to estimate ET at global scales. These products, partly based on observational data, include satellite-based products, land surface model (LSM) simulations, atmospheric reanalysis output, estimates based on empirical upscaling of eddy-covariance flux measurements, and atmospheric water balance datasets. The LandFlux-EVAL project aims to evaluate and compare these newly developed datasets. Additionally, an evaluation of IPCC AR4 global climate model (GCM) simulations is presented, providing an assessment of their capacity to reproduce flux behavior relative to the observations-based products. Though differently constrained with observations, the analyzed reference datasets display similar large-scale ET patterns. ET from the IPCC AR4 simulations was significantly smaller than that from the other products for India (up to 1 mm/d) and parts of eastern South America, and larger in the western USA, Australia and China. The inter-product variance is lower across the IPCC AR4 simulations than across the reference datasets in several regions, which indicates that uncertainties may be underestimated in the IPCC AR4 models due to shared biases of these simulations.

  12. Estimating the rates of mass change, ice volume change and snow volume change in Greenland from ICESat and GRACE data

    NASA Astrophysics Data System (ADS)

    Slobbe, D. C.; Ditmar, P.; Lindenbergh, R. C.

    2009-01-01

    The focus of this paper is on the quantification of ongoing mass and volume changes over the Greenland ice sheet. For that purpose, we used elevation changes derived from the Ice, Cloud, and land Elevation Satellite (ICESat) laser altimetry mission and monthly variations of the Earth's gravity field as observed by the Gravity Recovery and Climate Experiment (GRACE) mission. Based on a stand-alone processing scheme of ICESat data, the most probable estimate of the mass change rate from 2003 February to 2007 April equals −139 ± 68 Gton yr−1. Here, we used a density of 600 ± 300 kg m−3 to convert the estimated elevation change rate in the region above 2000 m into a mass change rate. For the region below 2000 m, we used a density of 900 ± 300 kg m−3. Based on GRACE gravity models from mid-2002 to mid-2007 as processed by CNES, CSR, DEOS and GFZ, the estimated mass change rate for the whole of Greenland ranges between −128 and −218 Gton yr−1. Most GRACE solutions show much stronger mass losses than obtained with ICESat, which might be related to a local undersampling of the mass loss by ICESat and uncertainties in the used snow/ice densities. To solve the problem of uncertainties in the snow and ice densities, two independent joint inversion concepts are proposed to profit from both GRACE and ICESat observations simultaneously. The first concept, developed to reduce the uncertainty of the mass change rate, estimates this rate in combination with an effective snow/ice density. However, it turns out that the uncertainties are not reduced, which is probably caused by the unrealistic assumption that the effective density is constant in space and time. The second concept is designed to convert GRACE and ICESat data into two totally new products: variations of ice volume and variations of snow volume separately. Such an approach is expected to lead to new insights in ongoing mass change processes over the Greenland ice sheet. Our results show for different GRACE solutions a snow volume change of −11 to 155 km3 yr−1 and an ice loss with a rate of −136 to −292 km3 yr−1.
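    The volume-to-mass conversion and a first-order propagation of the density uncertainty can be written as Mdot = rho * Vdot; the sketch below uses rounded, illustrative numbers in the spirit of the quoted densities rather than reproducing the per-elevation-band calculation.

```python
# Sketch of the volume-to-mass conversion and first-order uncertainty
# propagation, Mdot = rho * Vdot; numbers are rounded and illustrative only.
import numpy as np

vdot, vdot_sd = -200.0, 60.0       # volume change rate, km^3/yr (illustrative)
rho, rho_sd   = 750.0, 300.0       # effective snow/ice density, kg/m^3

mdot = rho * vdot * 1e9 * 1e-12    # km^3 -> m^3, kg -> Gton
mdot_sd = 1e-3 * np.sqrt((vdot * rho_sd) ** 2 + (rho * vdot_sd) ** 2)
print(round(mdot, 1), "+/-", round(mdot_sd, 1), "Gton/yr")
```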

  13. Rainfall recharge estimation on a nation-wide scale using satellite information in New Zealand

    NASA Astrophysics Data System (ADS)

    Westerhoff, Rogier; White, Paul; Moore, Catherine

    2015-04-01

    Models of rainfall recharge to groundwater are challenged by the need to combine uncertain estimates of rainfall, evapotranspiration, terrain slope, and unsaturated zone parameters (e.g., soil drainage and hydraulic conductivity of the subsurface). Therefore, rainfall recharge is easiest to estimate on a local scale in well-drained plains, where it is known that rainfall directly recharges groundwater. In New Zealand, this simplified approach works in the policy framework of regional councils, who manage water allocation at the aquifer and sub-catchment scales. However, a consistent overview of rainfall recharge is difficult to obtain at catchment and national scale: in addition to data uncertainties, data formats are inconsistent between catchments; the density of ground observations, where these exist, differs across regions; each region typically uses different local models for estimating recharge components; and different methods and ground observations are used for calibration and validation of these models. The research described in this paper therefore presents a nation-wide approach to estimate rainfall recharge in New Zealand. The method used is a soil water balance approach, with input data from national rainfall and soil and geology databases. Satellite data (i.e., evapotranspiration, soil moisture, and terrain) aid in the improved calculation of rainfall recharge, especially in data-sparse areas. A first version of the model has been implemented on a 1 km x 1 km and monthly scale between 2000 and 2013. A further version will include a quantification of recharge estimate uncertainty: with both "top down" input error propagation methods and catchment-wide "bottom up" assessments of integrated uncertainty being adopted. Using one nation-wide methodology opens up new possibilities: it can, for example, help in more consistent estimation of water budgets, groundwater fluxes, or other hydrological parameters. Since recharge is estimated for the entire land surface, and not only the known aquifers, the model also identifies other zones that could potentially recharge aquifers, including large areas (e.g., mountains) that are currently regarded as impervious. The resulting rainfall recharge data have also been downscaled in a 200 m x 200 m calculation of a national monthly water table. This will lead to better estimation of hydraulic conductivity, which holds considerable potential for further research in unconfined aquifers in New Zealand.
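    A minimal monthly bucket-type soil water balance, with invented forcing and soil parameters, illustrates the kind of recharge calculation described; recharge is taken as the drainage generated when storage exceeds the water-holding capacity.

```python
# Monthly bucket-type soil water balance sketch (invented parameters and
# forcing): recharge is the drainage generated when storage exceeds the
# soil's water-holding capacity.
import numpy as np

rain = np.array([120., 90., 70., 40., 30., 25., 30., 45., 60., 80., 100., 130.])  # mm
pet  = np.array([30., 40., 60., 80., 100., 110., 105., 90., 70., 50., 35., 28.])  # mm
capacity, storage = 100.0, 50.0     # water-holding capacity / initial storage (mm)

recharge = np.zeros(12)
for m in range(12):
    aet = min(pet[m], storage + rain[m])       # actual ET limited by available water
    storage = storage + rain[m] - aet
    if storage > capacity:                     # excess water drains below the root zone
        recharge[m] = storage - capacity
        storage = capacity
print(recharge.sum(), np.round(recharge, 1))
```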

  14. Quantification of submarine groundwater discharge and its short-term dynamics by linking time-variant end-member mixing analysis and isotope mass balancing (222-Rn)

    NASA Astrophysics Data System (ADS)

    Petermann, Eric; Knöller, Kay; Stollberg, Reiner; Scholten, Jan; Rocha, Carlos; Weiß, Holger; Schubert, Michael

    2017-04-01

    Submarine groundwater discharge (SGD) plays a crucial role for the water quality of coastal waters due to associated fluxes of nutrients, organic compounds and/or heavy metals. Thus, the quantification of SGD is essential for evaluating the vulnerability of coastal water bodies with regard to groundwater pollution as well as for understanding the matter cycles of the connected water bodies. Here, we present a scientific approach for quantifying discharge of fresh groundwater (GWf) and recirculated seawater (SWrec), including its short-term temporal dynamics, into the tide-affected Knysna estuary, South Africa. For a time-variant end-member mixing analysis we conducted time-series observations of radon (222Rn) and salinity within the estuary over two tidal cycles in combination with estimates of the related end-members for seawater, river water, GWf and SWrec. The mixing analysis was treated as a constrained optimization problem for finding an end-member mixing ratio that simultaneously fits the observed data for radon and salinity best for every time step. Uncertainty of each mixing ratio was quantified by Monte Carlo simulations of the optimization procedure considering uncertainty in end-member characterization. Results reveal the highest GWf and SWrec fractions in the estuary during peak low tide, with averages of 0.8 % and 1.4 %, respectively. Further, we calculated a radon mass balance that revealed a daily radon flux of 4.8 × 10⁸ Bq into the estuary, equivalent to a GWf discharge of 29,000 m3/d (9,000-59,000 m3/d for the 25th-75th percentile range) and a SWrec discharge of 80,000 m3/d (45,000-130,000 m3/d for the 25th-75th percentile range). The uncertainty of SGD reflects the end-member uncertainty, i.e. the spatial heterogeneity of groundwater composition. The presented approach allows the calculation of mixing ratios of multiple uncertain end-members for time-series measurements of multiple parameters. Linking these results with a tracer mass balance allows conversion of end-member fractions to end-member fluxes.
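    One time step of the mixing analysis can be sketched as a constrained least-squares problem, as below; the end-member radon and salinity values are invented, and the Monte Carlo treatment of end-member uncertainty would wrap this optimisation in a resampling loop.

```python
# Sketch (invented end-member values): one time step of the mixing analysis,
# posed as a constrained least-squares fit of end-member fractions to the
# observed radon and salinity.
import numpy as np
from scipy.optimize import minimize

# columns: seawater, river water, fresh groundwater, recirculated seawater
radon    = np.array([5.0,   200.0, 15000.0, 3000.0])   # Bq/m3
salinity = np.array([35.0,  0.2,   0.5,     34.0])
obs      = np.array([150.0, 33.5])                     # observed radon, salinity

def misfit(f):
    pred = np.array([radon @ f, salinity @ f])
    return np.sum(((pred - obs) / obs) ** 2)           # relative least squares

x0   = np.full(4, 0.25)
cons = ({'type': 'eq', 'fun': lambda f: np.sum(f) - 1.0},)
res  = minimize(misfit, x0, bounds=[(0, 1)] * 4, constraints=cons, method='SLSQP')
print(np.round(res.x, 4))    # fractions of seawater, river, GWf, SWrec
```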

  15. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE PAGES

    Wang, Yan; Swiler, Laura

    2017-09-07

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  16. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Swiler, Laura

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  17. Practical uncertainty reduction and quantification in shock physics measurements

    DOE PAGES

    Akin, M. C.; Nguyen, J. H.

    2015-04-20

    We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.

  18. Simulation Credibility: Advances in Verification, Validation, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Editor); Eklund, Dean R.; Romero, Vicente J.; Pearce, Jeffrey A.; Keim, Nicholas S.

    2016-01-01

    Decision makers and other users of simulations need to know quantified simulation credibility to make simulation-based critical decisions and effectively use simulations, respectively. The credibility of a simulation is quantified by its accuracy in terms of uncertainty, and the responsibility of establishing credibility lies with the creator of the simulation. In this volume, we present some state-of-the-art philosophies, principles, and frameworks. The contributing authors involved in this publication have been dedicated to advancing simulation credibility. They detail and provide examples of key advances over the last 10 years in the processes used to quantify simulation credibility: verification, validation, and uncertainty quantification. The philosophies and assessment methods presented here are anticipated to be useful to other technical communities conducting continuum physics-based simulations; for example, issues related to the establishment of simulation credibility in the discipline of propulsion are discussed. We envision that simulation creators will find this volume very useful to guide and assist them in quantitatively conveying the credibility of their simulations.

  19. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

    These are the slides for a graduate presentation at Mississippi State University. It covers the following: the BRL Shaped-Charge Geometry in PAGOSA, mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) Parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
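    The brute-force Monte Carlo step can be illustrated with a toy algebraic surrogate standing in for the PAGOSA simulations; the input spreads and the surrogate below are invented, and only the mechanics of reporting the 95% range about the median are shown.

```python
# Sketch of the brute-force Monte Carlo step, with a toy algebraic surrogate
# standing in for the PAGOSA simulations; all spreads are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
det_velocity = rng.normal(7.98, 0.05, n)      # km/s
density      = rng.normal(1.715, 0.01, n)     # g/cm3
c1           = rng.normal(524.2, 10.0, n)     # GPa, JWL-like coefficient

# toy surrogate for jet tip velocity as a function of the sampled inputs
jet_tip = 1.05 * det_velocity * (density / 1.715) ** 0.5 + 1e-4 * c1

lo, med, hi = np.percentile(jet_tip, [2.5, 50.0, 97.5])
print(round(med, 3), "km/s, 95% range:", round(lo, 3), "-", round(hi, 3))
```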

  20. Leak Rate Quantification Method for Gas Pressure Seals with Controlled Pressure Differential

    NASA Technical Reports Server (NTRS)

    Daniels, Christopher C.; Braun, Minel J.; Oravec, Heather A.; Mather, Janice L.; Taylor, Shawn C.

    2015-01-01

    An enhancement to the pressure decay leak rate method with mass point analysis solved deficiencies in the standard method. By adding a control system, a constant gas pressure differential across the test article was maintained. As a result, the desired pressure condition was met at the onset of the test, and the mass leak rate and measurement uncertainty were computed in real time. The data acquisition and control system were programmed to stop automatically when specified criteria were met. Typically, the test was stopped when a specified level of measurement uncertainty was attained. Using silicone O-ring test articles, the new method was compared with the standard method that permitted the downstream pressure to be non-constant atmospheric pressure. The two methods recorded comparable leak rates, but the new method recorded leak rates with significantly lower measurement uncertainty, statistical variance, and test duration. By utilizing this new method for leak rate quantification, projects can reduce cost and schedule, improve test results, and ease comparison between data sets.
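
    A minimal sketch of a mass point (pressure decay) leak rate calculation of the kind described above, assuming an ideal gas, a known internal volume, and isothermal conditions; the volume, gas constant choice, and noise level are hypothetical and not taken from the NASA report. The leak rate is the slope of a linear fit of contained gas mass versus time, and the standard error of that slope provides a simple real-time measurement uncertainty estimate (the report's method stops the test once this uncertainty falls below a target).

```python
import numpy as np

R_SPECIFIC_AIR = 287.05   # J/(kg K), specific gas constant for dry air
VOLUME = 2.0e-3           # m^3, assumed internal volume of the test article cavity
TEMPERATURE = 293.15      # K, assumed constant temperature

def mass_point_leak_rate(time_s, pressure_pa, temperature_k=TEMPERATURE):
    """Estimate leak rate (kg/s) and its 1-sigma uncertainty from a pressure decay record."""
    # Mass of gas in the control volume at each sample (ideal gas law).
    mass = pressure_pa * VOLUME / (R_SPECIFIC_AIR * temperature_k)
    # Leak rate = negative slope of mass vs. time; cov=True returns the fit covariance.
    (slope, intercept), cov = np.polyfit(time_s, mass, 1, cov=True)
    return -slope, np.sqrt(cov[0, 0])

# Synthetic example: slow decay from 200 kPa with measurement noise.
t = np.arange(0.0, 600.0, 1.0)                                          # s
p = 2.0e5 * np.exp(-1.0e-5 * t) + np.random.normal(0.0, 20.0, t.size)   # Pa
rate, sigma = mass_point_leak_rate(t, p)
print(f"leak rate = {rate:.3e} kg/s +/- {sigma:.1e} (1-sigma)")
```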

  1. Atmospheric inversion for cost effective quantification of city CO2 emissions

    NASA Astrophysics Data System (ADS)

    Wu, L.; Broquet, G.; Ciais, P.; Bellassen, V.; Vogel, F.; Chevallier, F.; Xueref-Remy, I.; Wang, Y.

    2015-11-01

    Cities, currently covering only a very small portion (< 3 %) of the world's land surface, directly release to the atmosphere about 44 % of global energy-related CO2, and are associated with 71-76 % of CO2 emissions from global final energy use. Although many cities have set voluntary climate plans, their CO2 emissions are not evaluated by Monitoring, Reporting and Verification (MRV) procedures that play a key role for market- or policy-based mitigation actions. Here we propose a monitoring tool that could support the development of such procedures at the city scale. It is based on an atmospheric inversion method that exploits inventory data and continuous atmospheric CO2 concentration measurements from a network of stations within and around cities to estimate city CO2 emissions. We examine the cost-effectiveness and the performance of such a tool. The instruments presently used to measure CO2 concentrations at research stations are expensive. However, cheaper sensors are currently being developed and should be usable for the monitoring of CO2 emissions from a megacity in the near term. Our assessment of the inversion method is thus based on the use of several types of hypothetical networks, with a range of numbers of sensors sampling at 25 m a.g.l. The study case for this assessment is the monitoring of the emissions of the Paris metropolitan area (~ 12 million inhabitants and 11.4 Tg C emitted in 2010) during the month of January 2011. The performance of the inversion is evaluated in terms of uncertainties in the estimates of total and sectoral CO2 emissions. These uncertainties are compared to a notional ambitious target of diagnosing annual total city emissions with an uncertainty of 5 % (2-sigma). We find that, with only 10 stations, which is the typical size of current pilot networks deployed in some cities, the uncertainty in the 1-month total city CO2 emissions is significantly reduced by the inversion, by ~ 42 %, but still corresponds to an annual uncertainty that is two times larger than the target of 5 %. By extending the network from 10 to 70 stations, the inversion can meet this requirement. As for major sectoral CO2 emissions, the uncertainties in the inverted emissions using 70 stations are reduced relative to those obtained using 10 stations by 32 % for commercial and residential buildings, by 33 % for road transport and by 18 % for the production of energy by power plants. With 70 stations, the uncertainties from the inversion reach a 15 % 2-sigma annual uncertainty for dispersed building emissions, and 18 % for emissions from road transport and energy production. The inversion performance could be further improved by optimal design of station locations and/or by assimilating additional atmospheric measurements of species that are co-emitted with CO2 by fossil fuel combustion processes with a specific signature from each sector, such as carbon monoxide (CO). Atmospheric inversions based on continuous CO2 measurements from a large number of cheap sensors can thus deliver a valuable quantification tool for the monitoring and/or the verification of city CO2 emissions (baseline) and CO2 emission reductions (commitments).
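
    The uncertainty reduction quoted above can be illustrated with a toy linear Gaussian inversion, a minimal sketch under assumed values: a few emission sectors with a prior error covariance B, a randomly generated observation operator H standing in for atmospheric transport to the stations, and an observation error covariance R. None of the matrices reflect the actual Paris network or transport model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sectors, n_obs = 3, 70                   # hypothetical sector count and station-hours of data
B = np.diag([0.30, 0.25, 0.40]) ** 2       # prior error covariance (fractional errors squared)
H = rng.normal(size=(n_obs, n_sectors))    # stand-in Jacobian of concentrations w.r.t. emissions
R = np.eye(n_obs) * 0.5 ** 2               # observation error covariance

# Posterior covariance of the emission estimate for a linear Gaussian inversion:
#   A = (B^-1 + H^T R^-1 H)^-1
A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

prior_sigma = np.sqrt(np.diag(B))
post_sigma = np.sqrt(np.diag(A))
reduction = 1.0 - post_sigma / prior_sigma
for i, r in enumerate(reduction):
    print(f"sector {i}: prior {prior_sigma[i]:.2f} -> posterior {post_sigma[i]:.2f} "
          f"({100 * r:.0f}% uncertainty reduction)")
```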

  2. Uncertainty quantification of voice signal production mechanical model and experimental updating

    NASA Astrophysics Data System (ADS)

    Cataldo, E.; Soize, C.; Sampaio, R.

    2013-11-01

    The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for changing the fundamental frequency of a voice signal generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important for two main reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly. The second concerns the construction of the likelihood function: in general it is predefined using a known pdf, whereas here it is constructed in a new and different manner, using the modeled system itself.
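
    A minimal sketch of the kind of Bayesian update described above, with the voice production model replaced by a hypothetical scalar map from the tension parameter to fundamental frequency; the prior bounds, the noise level, and the observed frequencies are invented for illustration. Prior samples are reweighted by a Gaussian likelihood built from the observed fundamental frequencies, which is one simple way to approximate the posterior pdf.

```python
import numpy as np

rng = np.random.default_rng(1)

def fundamental_frequency(tension):
    """Hypothetical stand-in for the mechanical voice model: maps tension to f0 (Hz)."""
    return 80.0 + 60.0 * np.sqrt(tension)

# Uniform prior on the (non-dimensional) tension parameter, as in the paper.
tension_prior = rng.uniform(0.5, 4.0, size=20000)
f0_model = fundamental_frequency(tension_prior)

# "Experimental" fundamental frequencies (invented) and an assumed measurement noise.
f0_obs = np.array([168.0, 171.5, 166.2])
sigma_obs = 3.0

# Gaussian likelihood of each prior sample given all observations.
log_like = -0.5 * np.sum((f0_obs[None, :] - f0_model[:, None]) ** 2, axis=1) / sigma_obs ** 2
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

post_mean = np.sum(weights * tension_prior)
post_std = np.sqrt(np.sum(weights * (tension_prior - post_mean) ** 2))
print(f"posterior tension: mean = {post_mean:.3f}, std = {post_std:.3f}")
```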

  3. Identification and accurate quantification of structurally related peptide impurities in synthetic human C-peptide by liquid chromatography-high resolution mass spectrometry.

    PubMed

    Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei

    2018-06-04

    Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately value assign the purity of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison looking at the competencies of laboratories to perform peptide purity mass fraction assignments. More than 65 impurities were identified, confirmed, and accurately quantified by using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.

  4. Impact of hydrogeological and geomechanical properties on surface uplift at a CO2 injection site: Parameter estimation and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Newell, P.; Yoon, H.; Martinez, M. J.; Bishop, J. E.; Arnold, B. W.; Bryant, S.

    2013-12-01

    It is essential to couple multiphase flow and geomechanical response in order to predict the consequences of geological storage of CO2. In this study, we estimate the key hydrogeologic features that govern the geomechanical response (i.e., surface uplift) at the large-scale CO2 injection project at In Salah, Algeria, using the Sierra Toolkit, a multi-physics simulation code developed at Sandia National Laboratories. Importantly, a jointed rock model is used to study the effect of postulated fractures in the injection zone on the surface uplift. The In Salah Gas Project includes an industrial-scale demonstration of CO2 storage in an active gas field where CO2 from natural gas production is being re-injected into a brine-filled portion of the structure downdip of the gas accumulation. The observed data include millimeter-scale surface deformations (e.g., uplift) reported in the literature and injection well locations and rate histories provided by the operators. Our preliminary results show that the intrinsic permeability and Biot coefficient of the injection zone are important. Moreover, pre-existing fractures within the injection zone affect the uplift significantly. Estimation of additional (i.e., anisotropy ratio) and coupled parameters will help us to develop models that account for the complex relationship between mechanical integrity and CO2 injection-induced pressure changes. Uncertainty quantification of model predictions will also be performed using various algorithms including null-space Monte Carlo and polynomial-chaos expansion methods. This work highlights that coupled reservoir and geomechanical simulations combined with parameter estimation can provide a practical solution for designing operating conditions and understanding subsurface processes associated with CO2 injection. This work is supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  5. EPRI/NRC-RES fire human reliability analysis guidelines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Stuart R.; Cooper, Susan E.; Najafi, Bijan

    2010-03-01

    During the 1990s, the Electric Power Research Institute (EPRI) developed methods for fire risk analysis to support its utility members in the preparation of responses to Generic Letter 88-20, Supplement 4, 'Individual Plant Examination - External Events' (IPEEE). This effort produced a Fire Risk Assessment methodology for operations at power that was used by the majority of U.S. nuclear power plants (NPPs) in support of the IPEEE program and several NPPs overseas. Although these methods were acceptable for accomplishing the objectives of the IPEEE, EPRI and the U.S. Nuclear Regulatory Commission (NRC) recognized that they required upgrades to support current requirements for risk-informed, performance-based (RI/PB) applications. In 2001, EPRI and the NRC's Office of Nuclear Regulatory Research (RES) embarked on a cooperative project to improve the state of the art in fire risk assessment to support a new risk-informed environment in fire protection. This project produced a consensus document, NUREG/CR-6850 (EPRI 1011989), entitled 'Fire PRA Methodology for Nuclear Power Facilities', which addressed fire risk for at-power operations. NUREG/CR-6850 developed high-level guidance on the process for identification and inclusion of human failure events (HFEs) into the fire PRA (FPRA), and a methodology for assigning quantitative screening values to these HFEs. It outlined the initial considerations of performance shaping factors (PSFs) and related fire effects that may need to be addressed in developing best-estimate human error probabilities (HEPs). However, NUREG/CR-6850 did not describe a methodology to develop best-estimate HEPs given the PSFs and the fire-related effects. In 2007, EPRI and RES embarked on another cooperative project to develop explicit guidance for estimating HEPs for human failure events under fire-generated conditions, building upon existing human reliability analysis (HRA) methods. This document provides a methodology and guidance for conducting a fire HRA. This process includes identification and definition of post-fire human failure events, qualitative analysis, quantification, recovery, dependency, and uncertainty. This document provides three approaches to quantification: screening, scoping, and detailed HRA. Screening is based on the guidance in NUREG/CR-6850, with some additional guidance for scenarios with long time windows. Scoping is a new approach to quantification developed specifically to support the iterative nature of fire PRA quantification. Scoping is intended to provide less conservative HEPs than screening, but requires fewer resources than a detailed HRA analysis. For detailed HRA quantification, guidance has been developed on how to apply existing methods to assess post-fire HEPs.

  6. Uncertainty quantification in Rothermel's Model using an efficient sampling method

    Treesearch

    Edwin Jimenez; M. Yousuff Hussaini; Scott L. Goodrick

    2007-01-01

    The purpose of the present work is to quantify parametric uncertainty in Rothermel’s wildland fire spread model (implemented in software such as BehavePlus3 and FARSITE), which is undoubtedly among the most widely used fire spread models in the United States. This model consists of a nonlinear system of equations that relates environmental variables (input parameter...

  7. Approximating prediction uncertainty for random forest regression models

    Treesearch

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    The use of machine learning approaches such as random forest has increased for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches it provides no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
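
    Although the abstract is truncated, one common way to approximate prediction uncertainty for a random forest is to use the spread of the individual tree predictions; the sketch below does this with scikit-learn on synthetic data (the data set and forest settings are arbitrary, and this is not necessarily the estimator used by the authors).

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=6, noise=10.0, random_state=0)
forest = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

X_new = X[:5]
# Predictions of every individual tree for the new samples, shape (n_trees, n_samples).
per_tree = np.stack([tree.predict(X_new) for tree in forest.estimators_])

mean_pred = per_tree.mean(axis=0)
spread = per_tree.std(axis=0)   # crude per-point uncertainty proxy
for m, s in zip(mean_pred, spread):
    print(f"prediction {m:8.2f} +/- {s:6.2f} (1-sigma across trees)")
```

    The between-tree spread is only a rough proxy; calibrated prediction intervals usually require quantile regression forests or jackknife-style estimators.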

  8. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification of systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The proposed approach, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
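
    For a univariate polynomial on [0, 1], the core property exploited above is that the minimum and maximum Bernstein coefficients bound the polynomial's range; the sketch below converts power-basis coefficients to Bernstein coefficients and reports the resulting enclosure. This is a simplified one-variable illustration of the range-bounding idea, not the multivariate aleatory/epistemic framework of the paper.

```python
import numpy as np
from math import comb

def bernstein_coefficients(a):
    """Bernstein coefficients b_j of p(x) = sum_k a[k] x**k on [0, 1], j = 0..n."""
    n = len(a) - 1
    return np.array([sum(comb(j, k) / comb(n, k) * a[k] for k in range(j + 1))
                     for j in range(n + 1)])

# Example: p(x) = 1 - 3x + 2x^2 (power-basis coefficients, lowest order first).
a = [1.0, -3.0, 2.0]
b = bernstein_coefficients(a)
print(f"Bernstein enclosure on [0, 1]: [{b.min():.3f}, {b.max():.3f}]")

# Brute-force check of the true range for comparison.
x = np.linspace(0.0, 1.0, 10001)
p = np.polyval(a[::-1], x)
print(f"sampled range:                 [{p.min():.3f}, {p.max():.3f}]")
```

    The enclosure always contains the true range and can be tightened by degree elevation or domain subdivision, which is the "additional computational effort" mentioned in the abstract.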

  9. Physical Uncertainty Bounds (PUB)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughan, Diane Elizabeth; Preston, Dean L.

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  10. Impact of Nuclear Data Uncertainties on Advanced Fuel Cycles and their Irradiated Fuel - a Comparison between Libraries

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Cabellos, O.; Martínez, J. S.

    2014-04-01

    The uncertainties in the isotopic composition throughout the burnup due to nuclear data uncertainties are analysed. The different sources of uncertainty (decay data, fission yields and cross sections) are propagated individually, and their effects assessed. Two applications are studied: EFIT (an ADS-like reactor) and ESFR (a Sodium Fast Reactor). The impacts of the cross-section uncertainties provided by the EAF-2010, SCALE6.1 and COMMARA-2.0 libraries are compared. These Uncertainty Quantification (UQ) studies have been carried out with a Monte Carlo sampling approach implemented in the depletion/activation code ACAB. This implementation has been improved to overcome depletion/activation problems with variations of the neutron spectrum.

  11. Results for Phase I of the IAEA Coordinated Research Program on HTGR Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strydom, Gerhard; Bostelmann, Friederike; Yoon, Su Jong

    2015-01-01

    The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted, but has in many cases become the preferred way to replace traditional conservative analysis for safety and licensing purposes. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied. High Temperature Gas-cooled Reactors (HTGRs) have their own peculiarities (coated particle design, large graphite quantities, different materials and high temperatures) that also impose additional simulation requirements. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modeling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the HTR-PM (INET, China). This report summarizes the contributions of the HTGR Methods Simulation group at Idaho National Laboratory (INL) up to this point of the CRP. The activities at INL have so far focused on creating the problem specifications for the prismatic design, as well as providing reference solutions for the exercises defined for Phase I. An overview is provided of the HTGR UAM objectives and scope, and the detailed specifications for Exercises I-1, I-2, I-3 and I-4 are also included here for completeness. The main focus of the report is the compilation and discussion of reference results for Phase I (i.e., for input parameters at their nominal or best-estimate values), which is defined as the first step of the uncertainty quantification process. These reference results can be used by other CRP participants for comparison with other codes or their own reference results. The status of the Monte Carlo modeling of the experimental VHTRC facility is also discussed. Reference results were obtained for the neutronics stand-alone cases (Ex. I-1 and Ex. I-2) using the (relatively new) Monte Carlo code Serpent, and comparisons were performed with the more established Monte Carlo codes MCNP and KENO-VI. For the thermal-fluids stand-alone cases (Ex. I-3 and I-4), the commercial CFD code CFX was utilized to obtain reference results that can be compared with lower fidelity tools.

  12. Non-invasive evaluation of blood oxygen saturation and hematocrit from T1 and T2 relaxation times: In-vitro validation in fetal blood.

    PubMed

    Portnoy, Sharon; Seed, Mike; Sled, John G; Macgowan, Christopher K

    2017-12-01

    We propose an analytical method for calculating blood hematocrit (Hct) and oxygen saturation (sO2) from measurements of its T1 and T2 relaxation times. Through algebraic substitution, established two-compartment relationships describing R1 = 1/T1 and R2 = 1/T2 as a function of hematocrit and oxygen saturation were rearranged to solve for Hct and sO2 in terms of R1 and R2. The resulting solutions for Hct and sO2 are the roots of cubic polynomials. Feasibility of the method was established by comparing Hct and sO2 estimates obtained from relaxometry measurements (at 1.5 Tesla) in cord blood specimens to ground-truth values obtained by blood gas analysis. Monte Carlo simulations were also conducted to assess the effect of T1 and T2 measurement uncertainty on the precision of the Hct and sO2 estimates. Good agreement was observed between estimated and ground-truth blood properties (bias = 0.01; 95% limits of agreement = ±0.13 for Hct and sO2). Considering the combined effects of biological variability and random measurement noise, we estimate a typical uncertainty of ±0.1 for the Hct and sO2 estimates. Results demonstrate accurate quantification of Hct and sO2 from T1 and T2. This method is applicable to noninvasive fetal vessel oximetry, an application where existing oximetry devices are unusable or require risky blood-sampling procedures. Magn Reson Med 78:2352-2359, 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Bayesian analysis of rare events

    NASA Astrophysics Data System (ADS)

    Straub, Daniel; Papaioannou, Iason; Betz, Wolfgang

    2016-06-01

    In many areas of engineering and science there is an interest in predicting the probability of rare events, in particular in applications related to safety and security. Increasingly, such predictions are made through computer models of physical systems in an uncertainty quantification framework. Additionally, with advances in IT, monitoring and sensor technology, an increasing amount of data on the performance of the systems is collected. This data can be used to reduce uncertainty, improve the probability estimates and consequently enhance the management of rare events and associated risks. Bayesian analysis is the ideal method to include the data into the probabilistic model. It ensures a consistent probabilistic treatment of uncertainty, which is central in the prediction of rare events, where extrapolation from the domain of observation is common. We present a framework for performing Bayesian updating of rare event probabilities, termed BUS. It is based on a reinterpretation of the classical rejection-sampling approach to Bayesian analysis, which enables the use of established methods for estimating probabilities of rare events. By drawing upon these methods, the framework makes use of their computational efficiency. These methods include the First-Order Reliability Method (FORM), tailored importance sampling (IS) methods and Subset Simulation (SuS). In this contribution, we briefly review these methods in the context of the BUS framework and investigate their applicability to Bayesian analysis of rare events in different settings. We find that, for some applications, FORM can be highly efficient and is surprisingly accurate, enabling Bayesian analysis of rare events with just a few model evaluations. In a general setting, BUS implemented through IS and SuS is more robust and flexible.
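
    A minimal sketch of the rejection-sampling reinterpretation behind BUS, using a deliberately simple one-dimensional example: a standard normal prior, a Gaussian likelihood of one invented observation, and a rare "failure" event theta > 2.5. Prior samples are accepted when an auxiliary uniform variable u satisfies u <= L(theta)/c, where c is an upper bound on the likelihood, and the posterior rare-event probability is estimated from the accepted samples. Real BUS implementations replace this brute-force step with FORM, importance sampling, or Subset Simulation, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(2)

def likelihood(theta, obs=1.2, sigma=0.8):
    """Gaussian likelihood of a single (invented) observation."""
    return np.exp(-0.5 * ((obs - theta) / sigma) ** 2)

c = 1.0                                  # upper bound on the likelihood kernel
n = 1_000_000
theta = rng.standard_normal(n)           # prior samples
u = rng.uniform(0.0, 1.0, n)             # auxiliary uniform variable of the BUS formulation

accepted = theta[u <= likelihood(theta) / c]   # posterior samples via rejection
p_failure_posterior = np.mean(accepted > 2.5)  # rare event: theta exceeds 2.5

print(f"accepted samples: {accepted.size}")
print(f"posterior P(theta > 2.5) ~ {p_failure_posterior:.2e}")
```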

  14. Designing the Social Context for Easier Verification, Validation, and Uncertainty Quantification of Earth Science Data

    NASA Astrophysics Data System (ADS)

    Barkstrom, B. R.; Loeb, N. G.; Wielicki, B. A.

    2017-12-01

    Verification, Validation, and Uncertainty Quantification (VVUQ) are key actions that support conclusions based on Earth science data. Communities of data producers and users must undertake VVUQ when they create and use their data. The strategies [S] and tools [T] suggested below come from successful use on two large NASA projects. The first was the Earth Radiation Budget Experiment (ERBE). The second is the investigation of Clouds and the Earth's Radiant Energy System (CERES). [S] 1. Partition the production system into subsystems that deal with data transformations confined to limited space and time scales. Simplify the subsystems to minimize the number of data transformations in each subsystem. [S] 2. Derive algorithms from the fundamental physics and chemistry governing the parameters in each subsystem including those for instrument calibration. [S] 3. Use preliminary uncertainty estimates to detect unexpected discrepancies. Removing these requires diagnostic work as well as development and testing of fixes. [S] 4. Make sure there are adequate resources to support multiple end-to-end reprocessing of all data products. [T] 1. Create file identifiers that accommodate temporal and spatial sequences of data files and subsystem version changes. [T] 2. Create libraries of parameters used in common by different subsystems to reduce errors due to inconsistent values. [T] 3. Maintain a list of action items to record progress on resolving discrepancies. [T] 4. Plan on VVUQ activities that use independent data sources and peer review before distributing and archiving data. The goal of VVUQ is to provide a transparent link between the data and the physics and chemistry governing the measured quantities. The VVUQ effort also involves specialized domain experience and nomenclature. It often requires as much effort as the original system development. ERBE and CERES demonstrated that these strategies and tools can reduce the cost of VVUQ for Earth science data products.

  15. Quantifying the Sources and Sinks of Greenhouse Gases: What Does It Take to Satisfy Scientific and Decision-Making Needs?

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Keller, K.; Ogle, S. M.; Smith, S.

    2014-12-01

    Changes in the sources and sinks of greenhouse gases (GHGs) are key drivers of anthropogenic climate change. It is hence not surprising that current and emerging U.S. governmental science priorities and programs focused on climate change (e.g. a U.S. Carbon Cycle Science Plan; the U.S. Carbon Cycle Science Program, the U.S. Global Change Research Program, Executive Order 13653 'Preparing the U.S. for the Impacts of Climate Change') all call for an improved understanding of these sources and sinks. Measurements of the total atmospheric burden of these gases are well established, but measurements of their sources and sinks are difficult to make over spatial and temporal scales that are relevant for scientific and decision-making needs. Quantifying the uncertainty in these measurements is particularly challenging. This talk reviews the intersection of the state of knowledge of GHG sources and sinks, focusing in particular on CO2 and CH4, and the science and decision-making needs for this information. Different science and decision-making needs require differing levels of uncertainty. A number of high-priority needs (early detection of changes in the Earth system, projections of future climate, support of markets or regulations) often require a high degree of accuracy and/or precision. We will critically evaluate current U.S. planning documents to infer current perceived needs for GHG source/sink quantification, attempting to translate these needs into quantitative uncertainty metrics. We will compare these perceived needs with the current state of the art of GHG source/sink quantification, including the apparent pattern of systematic differences between so-called "top-down" and "bottom-up" flux estimates. This comparison will enable us to identify where needs can be readily satisfied, and where gaps in technology exist. Finally, we will examine what steps could be taken to close existing gaps.

  16. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
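
    The one-sided upper 95% confidence limit used in this family of reports is a standard t-based bound, UCL95% = mean + t(0.95, n-1) * s / sqrt(n), computed from the sample size, average, and standard deviation as stated above. A small sketch with invented concentration values (not Tank 18F data) follows.

```python
import numpy as np
from scipy import stats

def ucl95(values):
    """One-sided upper 95% confidence limit on the mean of a small sample."""
    x = np.asarray(values, dtype=float)
    n = x.size
    t_crit = stats.t.ppf(0.95, df=n - 1)
    return x.mean() + t_crit * x.std(ddof=1) / np.sqrt(n)

# Six hypothetical scrape-sample concentrations (arbitrary units), one value per sample,
# each of which would itself be the average of three analytical determinations.
samples = [12.1, 9.8, 11.4, 10.9, 13.2, 10.3]
print(f"mean = {np.mean(samples):.2f}, UCL95% = {ucl95(samples):.2f}")
```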

  17. STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by a UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  18. Sustaining an Online, Shared Community Resource for Models, Robust Open source Software Tools and Data for Volcanology - the Vhub Experience

    NASA Astrophysics Data System (ADS)

    Patra, A. K.; Valentine, G. A.; Bursik, M. I.; Connor, C.; Connor, L.; Jones, M.; Simakov, N.; Aghakhani, H.; Jones-Ivey, R.; Kosar, T.; Zhang, B.

    2015-12-01

    Over the last 5 years we have created the community collaboratory Vhub.org [Palma et al, J. App. Volc. 3:2 doi:10.1186/2191-5040-3-2] as a place to find volcanology-related resources, a venue for users to disseminate tools, teaching resources, and data, and an online platform to support collaborative efforts. As the community (current active users > 6000 from an estimated community of comparable size) embeds the tools in the collaboratory into educational and research workflows, it has become imperative to: a) redesign tools into robust, open source, reusable software for online and offline usage and enhancement; b) share large datasets with remote collaborators and other users seamlessly and securely; c) support complex workflows for uncertainty analysis, validation and verification, and data assimilation with large data. The focus on tool development and redevelopment has been twofold: firstly, to use best practices in software engineering and new hardware such as multi-core and graphics processing units; secondly, to enhance capabilities to support inverse modeling, uncertainty quantification using large ensembles and design of experiments, calibration, and validation. Among the software engineering practices we follow are open source development (facilitating community contributions), modularity and reusability. Our initial targets are four popular tools on Vhub: TITAN2D, TEPHRA2, PUFF and LAVA. Use of tools like these requires many observation-driven data sets, e.g. digital elevation models of topography, satellite imagery, field observations on deposits, etc. These data are often maintained in private repositories that are privately shared by "sneaker-net". As a partial solution to this we tested mechanisms using the iRODS software for online sharing of private data with public metadata and access limits. Finally, we adapted workflow engines (e.g. Pegasus) to support the complex data and computing workflows needed for uses such as uncertainty quantification for hazard analysis using physical models.

  19. Uncertainty in the global oceanic CO2 uptake induced by wind forcing: quantification and spatial analysis

    NASA Astrophysics Data System (ADS)

    Roobaert, Alizée; Laruelle, Goulven G.; Landschützer, Peter; Regnier, Pierre

    2018-03-01

    The calculation of the air-water CO2 exchange (FCO2) in the ocean depends not only on the gradient in CO2 partial pressure at the air-water interface but also on the parameterization of the gas exchange transfer velocity (k) and the choice of wind product. Here, we present regional and global-scale quantifications of the uncertainty in FCO2 induced by several widely used k formulations and four wind speed data products (CCMP, ERA, NCEP1 and NCEP2). The analysis is performed at a 1° × 1° resolution using the sea surface pCO2 climatology generated by Landschützer et al. (2015a) for the 1991-2011 period, while the regional assessment relies on the segmentation proposed by the Regional Carbon Cycle Assessment and Processes (RECCAP) project. First, we use k formulations derived from the global 14C inventory relying on a quadratic relationship between k and wind speed (k = c · U10²; Sweeney et al., 2007; Takahashi et al., 2009; Wanninkhof, 2014), where c is a calibration coefficient and U10 is the wind speed measured 10 m above the surface. Our results show that the global FCO2 values calculated with these k relationships diverge by 12 % when using CCMP, ERA or NCEP1. Due to differences in the regional wind patterns, regional discrepancies in FCO2 are more pronounced than global ones. These global and regional differences increase significantly when using NCEP2 or other k formulations, which include earlier relationships (i.e., Wanninkhof, 1992; Wanninkhof et al., 2009) as well as numerous local and regional parameterizations derived experimentally. To minimize uncertainties associated with the choice of wind product, it is possible to recalculate the coefficient c globally (hereafter called c∗) for a given wind product and its spatio-temporal resolution, in order to match the last evaluation of the global k value. We thus performed these recalculations for each wind product at the resolution and time period of our study, but the resulting global FCO2 estimates still diverge by 10 %. These results also reveal that the Equatorial Pacific, the North Atlantic and the Southern Ocean are the regions in which the choice of wind product most strongly affects the estimation of FCO2, even when using c∗.
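
    A minimal sketch of the flux calculation that the k parameterization feeds into, FCO2 = k · K0 · (pCO2,sea - pCO2,air), using the quadratic form k = c · U10² with the usual Schmidt number correction. The coefficient value follows the Wanninkhof (2014) scaling cited in the abstract but should be treated as indicative only, and the solubility and input numbers are illustrative placeholders rather than values from the study.

```python
import numpy as np

def transfer_velocity_cm_per_hr(u10, schmidt, c=0.251):
    """Quadratic gas transfer velocity k = c * U10^2 * (Sc/660)^-0.5 in cm/hr.

    c ~ 0.251 cm hr^-1 (m s^-1)^-2 follows Wanninkhof (2014); indicative only.
    """
    return c * u10 ** 2 * (schmidt / 660.0) ** -0.5

def co2_flux_mol_m2_yr(u10, dpco2_uatm, schmidt=660.0, k0_mol_L_atm=0.035):
    """Air-sea CO2 flux, positive = outgassing; k0 is an assumed solubility."""
    k_m_per_yr = transfer_velocity_cm_per_hr(u10, schmidt) * 0.01 * 24 * 365
    k0_mol_m3_uatm = k0_mol_L_atm * 1000.0 * 1e-6
    return k_m_per_yr * k0_mol_m3_uatm * dpco2_uatm

# Same pCO2 gradient, two wind products disagreeing by 1 m/s at one grid cell.
for u10 in (7.0, 8.0):
    flux = co2_flux_mol_m2_yr(u10, dpco2_uatm=-30.0)
    print(f"U10 = {u10} m/s -> F = {flux:+.2f} mol m-2 yr-1")
```

    The roughly 30 % flux change produced by a 1 m/s wind difference in this toy case illustrates why the choice of wind product matters so much in the analysis above.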

  20. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  1. Burden of disease caused by local transport in Warsaw, Poland

    PubMed Central

    Tainio, Marko

    2015-01-01

    Transport is a major source of air pollution, noise, injuries and physical activity in the urban environment. The quantification of the health risks and benefits arising from these factors would provide useful information for the planning of cost-effective mitigation actions. In this study we quantified the burden of disease caused by local transport in the city of Warsaw, Poland. The disability-adjusted life-years (DALYs) were estimated for transport related air pollution (particulate matter (PM), nitrogen oxides (NOx), sulfur dioxide (SO2), benzo[a]pyrene (BaP), cadmium, lead and nickel), noise, injuries and physical activity. Exposure to these factors was based on local and international data, and the exposure-response functions (ERFs) were based on published reviews and recommendations. The uncertainties were quantified and propagated with the Monte Carlo method. Local transport generated air pollution, noise and injuries were estimated to cause approximately 58,000 DALYs in the study area. From this burden 44% was due to air pollution and 46% due to noise. Transport related physical activity was estimated to cause a health benefit of 17,000 DALYs. Main quantified uncertainties were related to disability weight for the annoyance (due to noise) and to the ERFs for fine particulate matter (PM2.5) air pollution and walking. The results indicate that the health burden of transport could be mitigated by reducing motorized transport, which causes air pollution and noise, and by encouraging walking and cycling in the study area. PMID:26516622

  2. Predictive uncertainty analysis of plume distribution for geological carbon sequestration using sparse-grid Bayesian method

    NASA Astrophysics Data System (ADS)

    Shi, X.; Zhang, G.

    2013-12-01

    Because of the extensive computational burden, parametric uncertainty analyses are rarely conducted for geological carbon sequestration (GCS) process based multi-phase models. The difficulty of predictive uncertainty analysis for the CO2 plume migration in realistic GCS models is not only due to the spatial distribution of the caprock and reservoir (i.e. heterogeneous model parameters), but also because the GCS optimization estimation problem has multiple local minima due to the complex nonlinear multi-phase (gas and aqueous), and multi-component (water, CO2, salt) transport equations. The geological model built by Doughty and Pruess (2004) for the Frio pilot site (Texas) was selected and assumed to represent the 'true' system, which was composed of seven different facies (geological units) distributed among 10 layers. We chose to calibrate the permeabilities of these facies. Pressure and gas saturation values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. Each simulation of the model lasts about 2 hours. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid stochastic collocation method. This surrogate response surface global optimization algorithm is firstly used to calibrate the model parameters, then prediction uncertainty of the CO2 plume position is quantified due to the propagation from parametric uncertainty in the numerical experiments, which is also compared to the actual plume from the 'true' model. Results prove that the approach is computationally efficient for multi-modal optimization and prediction uncertainty quantification for computationally expensive simulation models. Both our inverse methodology and findings can be broadly applicable to GCS in heterogeneous storage formations.
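
    The surrogate idea described above can be sketched in a few lines: replace the expensive simulator with a cheap interpolant fitted to a handful of runs, then do Monte Carlo prediction (or calibration) on the surrogate. The sketch below uses a radial-basis-function interpolant from scipy and a trivial analytic "simulator" as a stand-in; it is not the adaptive sparse-grid stochastic collocation scheme of the study, only an illustration of the workflow, and all names and bounds are invented.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

def expensive_simulator(theta):
    """Stand-in for a multiphase GCS model: maps two log-permeabilities to a plume metric."""
    k1, k2 = theta[..., 0], theta[..., 1]
    return np.sin(3.0 * k1) + 0.5 * k2 ** 2 + 0.3 * k1 * k2

rng = np.random.default_rng(3)

# 1. A small design of 40 "simulations" over the 2-D parameter space [-1, 1]^2.
design = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(40), [-1, -1], [1, 1])
responses = expensive_simulator(design)

# 2. Fit the surrogate to the training runs.
surrogate = RBFInterpolator(design, responses, kernel="thin_plate_spline")

# 3. Monte Carlo uncertainty propagation on the surrogate (cheap).
theta_mc = rng.uniform(-1, 1, size=(100_000, 2))
pred = surrogate(theta_mc)
print(f"plume metric: mean = {pred.mean():.3f}, 95% interval = "
      f"[{np.quantile(pred, 0.025):.3f}, {np.quantile(pred, 0.975):.3f}]")
```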

  3. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  4. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  5. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    NASA Astrophysics Data System (ADS)

    Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
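
    A minimal sketch of variance-based (Sobol') sensitivity analysis of the kind described in these three records, assuming the SALib package is available and using a cheap analytic test function in place of the CICE emulator; the three "parameters" and their bounds are invented. First-order (S1) and total (ST) indices identify which inputs drive the output variance.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical problem definition standing in for a few CICE-like parameters.
problem = {
    "num_vars": 3,
    "names": ["snow_conductivity", "snow_grain_size", "meltpond_drainage"],
    "bounds": [[0.1, 0.5], [50.0, 200.0], [0.0, 1.0]],
}

def emulator(x):
    """Cheap stand-in for the sea ice emulator: maps parameters to an 'ice volume' metric."""
    return 2.0 * x[:, 0] + 0.01 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

# Saltelli sampling of the parameter space, then Sobol' index estimation.
param_values = saltelli.sample(problem, 1024)
y = emulator(param_values)
indices = sobol.analyze(problem, y)

for name, s1, st in zip(problem["names"], indices["S1"], indices["ST"]):
    print(f"{name:20s}  S1 = {s1:5.2f}   ST = {st:5.2f}")
```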

  6. Effects of 2D and 3D Error Fields on the SAS Divertor Magnetic Topology

    NASA Astrophysics Data System (ADS)

    Trevisan, G. L.; Lao, L. L.; Strait, E. J.; Guo, H. Y.; Wu, W.; Evans, T. E.

    2016-10-01

    The successful design of plasma-facing components in fusion experiments is of paramount importance in both the operation of future reactors and in the modification of operating machines. Indeed, the Small Angle Slot (SAS) divertor concept, proposed for application on the DIII-D experiment, combines a small incident angle at the plasma strike point with a progressively opening slot, so as to better control heat flux and erosion in high-performance tokamak plasmas. Uncertainty quantification of the error fields expected around the striking point provides additional useful information in both the design and the modeling phases of the new divertor, in part due to the particular geometric requirement of the striking flux surfaces. The presented work involves both 2D and 3D magnetic error field analysis on the SAS strike point carried out using the EFIT code for 2D equilibrium reconstruction, V3POST for vacuum 3D computations and the OMFIT integrated modeling framework for data analysis. An uncertainty in the magnetic probes' signals is found to propagate non-linearly as an uncertainty in the striking point and angle, which can be quantified through statistical analysis to yield robust estimates. Work supported by contracts DE-FG02-95ER54309 and DE-FC02-04ER54698.

  7. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioning of total error into random and systematic error, and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.

  8. EBR-II Reactor Physics Benchmark Evaluation Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, Chad L.; Lum, Edward S; Stewart, Ryan

    This report provides a reactor physics benchmark evaluation with associated uncertainty quantification for the critical configuration of the April 1986 Experimental Breeder Reactor II Run 138B core configuration.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanier, Nicholas Edward

    We have completed implementation of a laser package in LANL's principal AGEX design code, Cassio. Although we have greatly improved our target characterization and uncertainty quantification, we remain unable to satisfactorily simulate the NIF Pleiades data.

  10. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratories on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  11. Accuracy requirements and uncertainties in radiotherapy: a report of the International Atomic Energy Agency.

    PubMed

    van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed

    2017-01-01

    Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. (6) Professional staff should be appropriately educated and adequate staffing levels should be maintained. (7) For reporting purposes, uncertainties should be presented. (8) Manufacturers should provide training on all equipment. (9) Research should aid in improving the accuracy of radiotherapy. Some example research projects are suggested.

  12. Rainfall extremes from TRMM data and the Metastatistical Extreme Value Distribution

    NASA Astrophysics Data System (ADS)

    Zorzetto, Enrico; Marani, Marco

    2017-04-01

    A reliable quantification of the probability of occurrence of weather extremes is essential for designing resilient water infrastructures and hazard mitigation measures. However, it is increasingly clear that the presence of inter-annual climatic fluctuations determines a substantial long-term variability in the frequency of occurrence of extreme events. This circumstance calls into question the foundations of traditional extreme value theory, which hinges on stationary Poisson processes or on asymptotic assumptions to derive the Generalized Extreme Value (GEV) distribution. We illustrate here, with application to daily rainfall, a new approach to extreme value analysis, the Metastatistical Extreme Value Distribution (MEVD). The MEVD relaxes the above assumptions and is based on the whole distribution of daily rainfall events, thus allowing optimal use of all available observations. Using a global dataset of rain gauge observations, we show that the MEVD significantly outperforms the Generalized Extreme Value distribution, particularly for long average recurrence intervals and when small samples are available. The latter property suggests that the MEVD is particularly suited for applications to satellite rainfall estimates, which cover only about two decades, making extreme value estimation especially challenging. Here we apply the MEVD to the TRMM TMPA 3B42 product, an 18-year dataset of remotely-sensed daily rainfall providing quasi-global coverage. Our analyses yield a global-scale mapping of daily rainfall extremes and of their distributional tail properties, bridging the existing large gaps in ground-based networks. Finally, we illustrate how our global-scale analysis can provide insight into how properties of local rainfall regimes affect tail estimation uncertainty when using the GEV or MEVD approach. We find a dependence of the estimation uncertainty, for both the GEV- and MEVD-based approaches, on the average annual number and on the inter-annual variability of rainy days. In particular, estimation uncertainty decreases (1) as the mean annual number of wet days increases, and (2) as the variability in the number of rainy days, expressed by its coefficient of variation, decreases. We tentatively explain this behavior in terms of the assumptions underlying the two approaches.
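
    A minimal sketch of the MEVD recipe described above, assuming a Weibull parent distribution for wet-day rainfall and synthetic data: fit the parent to each year's wet days, average F_j(x)^n_j over the years, and invert for a return level. The distribution choice, record length and parameter values are assumptions for illustration only.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Synthetic record: 20 years of wet-day rainfall depths (mm), Weibull-like.
years = []
for _ in range(20):
    n_wet = rng.poisson(90)  # number of wet days in this year
    years.append(stats.weibull_min.rvs(0.8, scale=8.0, size=n_wet, random_state=rng))

# Fit a Weibull parent to each year's wet-day amounts (location fixed at 0).
fits = [stats.weibull_min.fit(y, floc=0.0) for y in years]
n_wet_days = [len(y) for y in years]

def mev_cdf(x):
    """MEVD CDF of the annual maximum: average of F_j(x)**n_j over years."""
    return np.mean([stats.weibull_min.cdf(x, c, loc, s) ** n
                    for (c, loc, s), n in zip(fits, n_wet_days)])

def return_level(T_return):
    """Daily rainfall with average recurrence interval T_return (years)."""
    target = 1.0 - 1.0 / T_return
    return optimize.brentq(lambda x: mev_cdf(x) - target, 1e-3, 1e4)

print(f"20-year MEVD return level: {return_level(20):.1f} mm/day")
```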

  13. Highly-resolved Modeling of Emissions and Concentrations of Carbon Monoxide, Carbon Dioxide, Nitrogen Oxides, and Fine Particulate Matter in Salt Lake City, Utah

    NASA Astrophysics Data System (ADS)

    Mendoza, D. L.; Lin, J. C.; Mitchell, L.; Ehleringer, J. R.

    2014-12-01

    Accurate, high-resolution data on air pollutant emissions and concentrations are needed to understand human exposures and to support policy and pollution management. Quantifying the associated uncertainties is an important step in this process. We present a spatially explicit and highly resolved emissions inventory for Salt Lake County, Utah, and trace gas concentration estimates for carbon monoxide (CO), carbon dioxide (CO2), nitrogen oxides (NOx) and fine particles (PM2.5) within Salt Lake City. We assess the validity of this approach by comparing measured concentrations against simulated values derived from combining the emissions inventory with an atmospheric model. The emissions inventory for the criteria pollutants was constructed using the 2011 National Emissions Inventory (NEI). The spatial and temporal allocation methods from the Emission Modeling Clearinghouse data set are used to downscale the NEI data from annual to hourly scales and from county-level to 500 m x 500 m resolution. Onroad mobile source emissions were estimated by combining a bottom-up emissions calculation approach for large roadway links with a top-down spatial allocation approach for other roadways. Vehicle activity data for road links were derived from automatic traffic recorder data. The emissions inventory for CO2 was obtained from the Hestia emissions data product at an hourly, building, facility, and road link resolution. The AERMOD and CALPUFF dispersion models were used to transport emissions and estimate air pollutant concentrations at an hourly temporal and 500 m x 500 m spatial resolution. Modeled results were compared against measurements from a mobile laboratory equipped with trace gas instrumentation traveling on pre-determined routes in the Salt Lake City area. The comparison between the two approaches to concentration estimation highlights spatial locations and hours of high variability and uncertainty. The results presented here will improve understanding of variability and uncertainty in emissions and concentrations to better inform future policy. This work will also facilitate the development of a systematic approach to incorporating measurement data and models to better constrain estimates of the pollutant concentrations that determine the extent to which urban populations are exposed to adverse air quality.
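
    The sketch below illustrates the general downscaling step described above: an annual county total is allocated to a 500 m grid with a spatial surrogate and to hours of the day with a diurnal profile. The surrogate weights, the profile and the totals are invented for illustration and are not the Emission Modeling Clearinghouse factors.

```python
import numpy as np

rng = np.random.default_rng(1)

annual_county_total = 1.2e4  # tons/year of a pollutant for the county (illustrative)

# Spatial surrogate: e.g. road density on a 500 m grid, normalized to sum to 1.
spatial_weight = rng.random((40, 40))
spatial_weight /= spatial_weight.sum()

# Temporal surrogate: a simple diurnal profile (fraction of daily emissions per hour).
hours = np.arange(24)
diurnal = 1.0 + 0.8 * np.sin((hours - 6) / 24 * 2 * np.pi)  # peaks in daytime
diurnal /= diurnal.sum()

# Hourly gridded emissions for an average day.
daily_total = annual_county_total / 365.0
hourly_grid = daily_total * diurnal[:, None, None] * spatial_weight[None, :, :]

print(hourly_grid.shape)        # (24, 40, 40)
print(hourly_grid.sum() * 365)  # recovers the annual county total
```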

  14. Uncertainty quantification for personalized analyses of human proximal femurs.

    PubMed

    Wille, Hagen; Ruess, Martin; Rank, Ernst; Yosibash, Zohar

    2016-02-29

    Computational models for the personalized analysis of human femurs contain uncertainties in bone material properties and loads, which affect the simulation results. To quantify this influence, we developed a probabilistic framework based on polynomial chaos (PC) that propagates stochastic input variables through any computational model. We considered a stochastic E-ρ relationship and a stochastic hip contact force, representing realistic variability of experimental data. Their influence on the prediction of principal strains (ϵ1 and ϵ3) was quantified for one human proximal femur, including sensitivity and reliability analysis. Large variabilities in the principal strain predictions were found in the cortical shell of the femoral neck, with coefficients of variation of ≈40%. Between 60 and 80% of the variance in ϵ1 and ϵ3 is attributable to the uncertainty in the E-ρ relationship, while ≈10% is caused by the load magnitude and 5-30% by the load direction. Principal strain directions were unaffected by material and loading uncertainties. The antero-superior and medial inferior sides of the neck exhibited the largest probabilities for tensile and compression failure; however, all were very small (p_f < 0.001). In summary, uncertainty quantification with PC has been demonstrated to efficiently and accurately describe the influence of very different stochastic inputs, which increases the credibility and explanatory power of personalized analyses of human proximal femurs.
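
    A minimal non-intrusive polynomial chaos sketch in the spirit of the framework described above, assuming two standard-normal germ variables (one standing in for the E-ρ scatter, one for the load) and a toy strain model in place of the finite element solver. It shows how the mean, the variance and first-order variance shares follow directly from the PC coefficients; it is not the authors' implementation.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial
from itertools import product

rng = np.random.default_rng(2)

def model(z1, z2):
    """Toy stand-in for the FE strain prediction: z1 ~ material (E-rho) scatter,
    z2 ~ load-magnitude scatter, both standard-normal germs."""
    return 1000.0 * (1.0 + 0.30 * z1 + 0.10 * z2 + 0.05 * z1 * z2)

# Multi-indices up to total degree 2 in two dimensions.
degree = 2
alphas = [(a, b) for a, b in product(range(degree + 1), repeat=2) if a + b <= degree]

def he(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

# Non-intrusive fit: evaluate the model on random samples, least-squares project.
n_samples = 500
Z = rng.standard_normal((n_samples, 2))
Y = model(Z[:, 0], Z[:, 1])
Psi = np.column_stack([he(a, Z[:, 0]) * he(b, Z[:, 1]) for a, b in alphas])
coef, *_ = np.linalg.lstsq(Psi, Y, rcond=None)

# Moments from the coefficients: E[He_a(Z1)^2 * He_b(Z2)^2] = a! * b!.
norms = np.array([factorial(a) * factorial(b) for a, b in alphas])
mean = coef[alphas.index((0, 0))]
var = sum(c**2 * n for c, n, ab in zip(coef, norms, alphas) if ab != (0, 0))

# First-order variance attribution: terms depending on z1 only / z2 only.
var_z1 = sum(c**2 * n for c, n, (a, b) in zip(coef, norms, alphas) if a > 0 and b == 0)
var_z2 = sum(c**2 * n for c, n, (a, b) in zip(coef, norms, alphas) if b > 0 and a == 0)
print(f"mean={mean:.1f}  var={var:.1f}  share z1={var_z1/var:.2f}  share z2={var_z2/var:.2f}")
```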

  15. Uncertainty-accounting environmental policy and management of water systems.

    PubMed

    Baresel, Christian; Destouni, Georgia

    2007-05-15

    Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.
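
    A toy Monte Carlo comparison of the two policy types discussed above, assuming a lognormal pollutant load: a deterministic policy sizes abatement so that the mean load meets the target, while an uncertainty-accounting policy sizes it so the target is met in 90% of realizations. All numbers are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative annual pollutant load (tons) with lognormal uncertainty.
load = rng.lognormal(mean=np.log(100.0), sigma=0.4, size=100_000)
target = 80.0  # allowed load after abatement

# Deterministic policy: abate just enough for the *mean* load to meet the target.
abate_det = max(0.0, 1.0 - target / load.mean())

# Uncertainty-accounting policy: abate so the target is met in 90% of realizations.
abate_unc = max(0.0, 1.0 - target / np.quantile(load, 0.90))

for name, a in [("deterministic", abate_det), ("uncertainty-accounting", abate_unc)]:
    compliance = np.mean((1.0 - a) * load <= target)
    print(f"{name:>23}: abatement fraction {a:.2f}, P(compliance) {compliance:.2f}")
```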

  16. Quantification of allyl hexanoate in pineapple beverages and yogurts as a case study to characterise a source of uncertainty in dietary exposure assessment to flavouring substances.

    PubMed

    Raffo, A; D'Aloise, A; Magrì, A D; Leclercq, C

    2012-01-01

    One source of uncertainty in the estimation of dietary exposure to flavouring substances is the uncertainty in the occurrence and concentration levels of these substances naturally present or added to foodstuffs. The aim of this study was to assess the variability of concentration levels of allyl hexanoate, considered as a case study, in two main food categories to which it is often added: pineapple juice-based beverages and yogurts containing pineapple. Thirty-four beverages and 29 yogurts, with pineapple fruit or juice and added flavourings declared as ingredients on the package, were purchased from the local market (in Rome) and analysed. Analytical methods based on the stir bar sorptive extraction (SBSE) technique for the isolation of the target analyte, and on GC-MS analysis for final determination, were developed for the two food categories. In beverages, allyl hexanoate concentrations ranged from less than 0.01 to 16.71 mg l⁻¹, whereas in yogurts they ranged from 0.02 to 89.41 mg kg⁻¹. Average concentrations in beverages and yogurts with pineapple as the main fruit ingredient (1.91 mg l⁻¹ for beverages, 9.61 mg kg⁻¹ for yogurts) were in fair agreement with average use level data reported from industry surveys for the relevant food categories (4.5 and 6.0 mg kg⁻¹, respectively). Within the group of yogurts a single product was found to contain a level of allyl hexanoate more than 10-fold higher than the average reported use level. The screening techniques developed by the European Food Safety Authority (EFSA) using use level data provided by industry gave estimates of exposure that were of the same order of magnitude as the estimates obtained for regular consumers who would be loyal to the pineapple yogurt and beverage products containing the highest observed concentration of the substance of interest. In this specific case the uncertainty in the results obtained with the use of standard screening techniques for exposure assessment based on industry reported use levels is low.
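
    The sketch below shows the kind of exposure calculation the screening comparison rests on: daily intake equals consumption times concentration divided by body weight, evaluated with the study's mean and maximum concentrations for a hypothetical brand-loyal consumer. The consumption amounts and the body weight are placeholder assumptions, not values from the paper.

```python
# Placeholder assumptions for a regular, brand-loyal consumer.
yogurt_intake_kg_day = 0.125    # one 125 g pineapple yogurt per day (assumed)
beverage_intake_l_day = 0.250   # one 250 ml pineapple beverage per day (assumed)
body_weight_kg = 60.0           # assumed adult body weight

# Concentrations of allyl hexanoate reported in the study.
mean_conc_yogurt = 9.61         # mg/kg, products with pineapple as main fruit
max_conc_yogurt = 89.41         # mg/kg, single outlier product
mean_conc_beverage = 1.91       # mg/l
max_conc_beverage = 16.71       # mg/l

def daily_exposure(conc_yogurt, conc_beverage):
    """Dietary exposure in mg per kg body weight per day."""
    intake_mg = (yogurt_intake_kg_day * conc_yogurt
                 + beverage_intake_l_day * conc_beverage)
    return intake_mg / body_weight_kg

print(f"average products: {daily_exposure(mean_conc_yogurt, mean_conc_beverage):.4f} mg/kg bw/day")
print(f"highest products: {daily_exposure(max_conc_yogurt, max_conc_beverage):.4f} mg/kg bw/day")
```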

  17. DAKOTA Design Analysis Kit for Optimization and Terascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
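
    The sketch below mimics, directly in Python rather than through a DAKOTA input file, the black-box pattern DAKOTA automates: draw a Latin hypercube sample over the uncertain inputs, run the simulation for each sample, and summarize the responses. The parameter bounds, the driver script name and the stand-in response function are hypothetical and do not reflect DAKOTA's own syntax.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over two uncertain inputs (names and bounds are hypothetical).
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=20)
samples = qmc.scale(unit_samples, l_bounds=[0.8, 250.0], u_bounds=[1.2, 350.0])

responses = []
for x1, x2 in samples:
    # A real study would launch the simulation code here, e.g.
    #   subprocess.run(["./driver.sh", str(x1), str(x2)], check=True)
    # and parse its results file; we substitute a cheap analytic stand-in.
    responses.append(x1 ** 2 + 0.01 * x2)

responses = np.asarray(responses)
print(f"mean response: {responses.mean():.3f}")
print(f"std  response: {responses.std(ddof=1):.3f}")
```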

  18. ROMUSE 2.0 User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khuwaileh, Bassam; Turinsky, Paul; Williams, Brian J.

    2016-10-04

    ROMUSE (Reduced Order Modeling Based Uncertainty/Sensitivity Estimator) is an effort within the Consortium for Advanced Simulation of Light Water Reactors (CASL) to provide an analysis tool to be used in conjunction with reactor core simulators, especially the Virtual Environment for Reactor Applications (VERA). ROMUSE is written in C++ and is currently capable of performing various types of parameter perturbations, uncertainty quantification, surrogate model construction and subspace analysis. Version 2.0 has the capability to interface with DAKOTA, which gives ROMUSE access to the various algorithms implemented within DAKOTA. ROMUSE is mainly designed to interface with VERA and the Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design (SCALE) [1,2,3]; however, ROMUSE can interface with any general model (e.g., Python and MATLAB) with an input/output (I/O) format that follows the Hierarchical Data Format 5 (HDF5). This brief user manual gives an overview of the use of ROMUSE and presents and briefly discusses example problems. The algorithms provided here range from algorithms inspired by those discussed in Ref. [4] to nuclear-specific algorithms discussed in Ref. [3].
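
    A minimal sketch of exchanging data with a generic model through HDF5 using h5py, in the spirit of the I/O convention mentioned above; the file name, group layout and parameter names are invented for illustration and do not reflect ROMUSE's actual schema.

```python
import h5py
import numpy as np

rng = np.random.default_rng(4)

# Write perturbed input parameters for the model to read (layout is hypothetical).
perturbations = rng.normal(loc=1.0, scale=0.02, size=(50, 3))  # 50 samples, 3 parameters
with h5py.File("model_io.h5", "w") as f:
    grp = f.create_group("inputs")
    grp.create_dataset("perturbation_factors", data=perturbations)
    grp.attrs["parameter_names"] = np.array(
        ["enrichment", "coolant_density", "fuel_temperature"], dtype="S")

# ... the external model would run here and append its outputs ...

# Read back whatever was written (again, a hypothetical layout).
with h5py.File("model_io.h5", "r") as f:
    read_back = f["inputs/perturbation_factors"][...]
print(read_back.shape)  # (50, 3)
```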

  19. Uncertainty characterization and quantification in air pollution models. Application to the ADMS-Urban model.

    NASA Astrophysics Data System (ADS)

    Debry, E.; Malherbe, L.; Schillinger, C.; Bessagnet, B.; Rouil, L.

    2009-04-01

    Evaluation of human exposure to atmospheric pollution usually requires knowledge of pollutant concentrations in ambient air. In the framework of the PAISA project, which studies the influence of socio-economic status on relationships between air pollution and short-term health effects, the concentrations of gaseous and particulate pollutants are computed over Strasbourg with the ADMS-Urban model. As for any modeling result, simulated concentrations come with uncertainties which have to be characterized and quantified. Uncertainties arise from several sources: from input data and parameters, i.e. the fields used to run the model such as meteorological fields, boundary conditions and emissions; from the model formulation, because of incomplete or inaccurate treatment of dynamical and chemical processes; and from the inherently stochastic behavior of the atmosphere and of human activities [1]. Our aim here is to assess the uncertainties of the simulated concentrations with respect to input data and model parameters. The first step therefore consisted in identifying the input data and model parameters that contribute most to the spatial and temporal variability of the predicted concentrations. Concentrations of several pollutants were simulated for two months in winter 2004 and two months in summer 2004 over five areas of Strasbourg. The sensitivity analysis shows the dominant influence of boundary conditions and emissions. Among model parameters, the roughness and Monin-Obukhov lengths appear to have non-negligible local effects. Dry deposition is also an important dynamic process. The second step of the characterization and quantification of uncertainties consists in assigning a probability distribution to each input datum and model parameter and in propagating the joint distribution through the model, so as to associate a probability distribution with the modeled concentrations. Several analytical and numerical methods exist to perform such an uncertainty analysis. We chose the Monte Carlo method, which has already been applied to atmospheric dispersion models [2, 3, 4]. The main advantage of this method is that it is insensitive to the number of perturbed parameters; its drawbacks are its computational cost and its slow convergence. To speed up convergence we used the method of antithetic variates, which takes advantage of the symmetry of the probability distributions. The air quality model simulations were carried out by the Association for the Study and Monitoring of Atmospheric Pollution in Alsace (ASPA). The output concentration distributions can then be updated with a Bayesian method. This work is part of an INERIS research project that also aims to assess the uncertainty of the CHIMERE dispersion model used in the Prev'Air forecasting platform (www.prevair.org), in order to deliver more accurate predictions. (1) Rao, K.S. Uncertainty Analysis in Atmospheric Dispersion Modeling, Pure and Applied Geophysics, 2005, 162, 1893-1917. (2) Beekmann, M. and Derognat, C. Monte Carlo uncertainty analysis of a regional-scale transport chemistry model constrained by measurements from the Atmospheric Pollution Over the PAris Area (ESQUIF) campaign, Journal of Geophysical Research, 2003, 108, 8559-8576. (3) Hanna, S.R. and Lu, Z. and Frey, H.C. and Wheeler, N. and Vukovich, J. and Arunachalam, S. and Fernau, M. and Hansen, D.A. Uncertainties in predicted ozone concentrations due to input uncertainties for the UAM-V photochemical grid model applied to the July 1995 OTAG domain, Atmospheric Environment, 2001, 35, 891-903. (4) Romanowicz, R. and Higson, H. and Teasdale, I. Bayesian uncertainty estimation methodology applied to air pollution modelling, Environmetrics, 2000, 11, 351-371.
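
    A minimal sketch of Monte Carlo propagation with antithetic variates, as used above: each uniform draw u is paired with its mirror 1 - u before being mapped through the inverse CDFs of the perturbed inputs, so the two halves of the sample are negatively correlated and the estimate of the mean converges faster. The toy dispersion "model" and the input distributions are assumptions for illustration, not ADMS-Urban.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def toy_model(emission, roughness_length):
    """Stand-in for a dispersion model: concentration grows with emissions
    and decreases with surface roughness (purely illustrative)."""
    return 35.0 * emission / (1.0 + 5.0 * roughness_length)

n_pairs = 2000
u = rng.random((n_pairs, 2))
u_antithetic = 1.0 - u  # mirrored uniforms

def to_inputs(uniforms):
    """Map uniforms to perturbed inputs via inverse CDFs (assumed distributions)."""
    emission = stats.lognorm.ppf(uniforms[:, 0], s=0.3, scale=1.0)      # multiplicative factor
    roughness = stats.uniform.ppf(uniforms[:, 1], loc=0.1, scale=0.9)   # 0.1 - 1.0 m
    return emission, roughness

c1 = toy_model(*to_inputs(u))
c2 = toy_model(*to_inputs(u_antithetic))
concentration = 0.5 * (c1 + c2)  # average of each antithetic pair

print(f"mean concentration: {concentration.mean():.2f}")
print(f"std error (antithetic pairs): {concentration.std(ddof=1) / np.sqrt(n_pairs):.3f}")
```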

  20. Application of Nitrogen and Carbon Stable Isotopes (δ¹⁵N and δ¹³C) to Quantify Food Chain Length and Trophic Structure

    PubMed Central

    Perkins, Matthew J.; McDonald, Robbie A.; van Veen, F. J. Frank; Kelly, Simon D.; Rees, Gareth; Bearhop, Stuart

    2014-01-01

    Increasingly, stable isotope ratios of nitrogen (δ¹⁵N) and carbon (δ¹³C) are used to quantify trophic structure, though relatively few studies have tested the accuracy of isotopic structural measures. For laboratory-raised and wild-collected plant-invertebrate food chains spanning four trophic levels, we estimated nitrogen range (NR) using δ¹⁵N and carbon range (CR) using δ¹³C, which are used to quantify food chain length and the breadth of trophic resources, respectively. Across a range of known food chain lengths we examined how NR and CR changed within and between food chains. Our isotopic estimates of structure are robust because they were calculated using resampling procedures that propagate variance in sample means through to quantified uncertainty in the final estimates. To identify the origins of uncertainty in estimates of NR and CR, we additionally examined variation in discrimination (the change in δ¹⁵N or δ¹³C from source to consumer) between trophic levels and among food chains. δ¹⁵N discrimination showed significant enrichment, while variation in enrichment was species- and system-specific, ranged broadly (1.4‰ to 3.3‰), and, importantly, propagated variation to subsequent estimates of NR. However, NR proved robust to such variation and distinguished food chain length well, though some overlap between longer food chains implies a need for awareness of this limitation. δ¹³C discrimination was inconsistent; generally no change or a small significant enrichment was observed. Consequently, estimates of CR changed little with increasing food chain length, showing the potential utility of δ¹³C as a tracer of energy pathways. This study serves as a robust test of isotopic quantification of food chain structure, and given that global estimates suggest aquatic food chains approximate four trophic levels and that many food chains include invertebrates, our use of four-trophic-level plant-invertebrate food chains makes our findings relevant to the majority of ecological systems. PMID:24676331
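
    A minimal sketch of the resampling idea described above, using synthetic δ¹⁵N data for a four-level food chain: values are resampled with replacement within each trophic level, trophic-level means are recomputed, and the nitrogen range (NR = highest minus lowest mean) is recorded to propagate sampling variance into a confidence interval. The data and group sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic delta-15N values (permil) for a four-level food chain.
d15n_by_level = [
    rng.normal(1.0, 0.5, size=12),   # plants
    rng.normal(3.5, 0.7, size=10),   # herbivores
    rng.normal(6.0, 0.8, size=8),    # predators / primary parasitoids
    rng.normal(8.4, 0.9, size=6),    # top predators / secondary parasitoids
]

def nitrogen_range(samples):
    """NR = difference between the highest and lowest trophic-level mean d15N."""
    means = [np.mean(s) for s in samples]
    return max(means) - min(means)

# Bootstrap: resample within each trophic level, propagate variance to NR.
boot = []
for _ in range(5000):
    resampled = [rng.choice(s, size=len(s), replace=True) for s in d15n_by_level]
    boot.append(nitrogen_range(resampled))
boot = np.array(boot)

low, high = np.percentile(boot, [2.5, 97.5])
print(f"NR point estimate: {nitrogen_range(d15n_by_level):.2f} permil")
print(f"bootstrap 95% CI:  [{low:.2f}, {high:.2f}] permil")
```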
